[Binary artifact: tar archive of `var/home/core/zuul-output/` containing `logs/kubelet.log.gz`, a gzip-compressed kubelet log. The compressed contents are binary and cannot be reproduced as text.]
6[!6{`¶$E-=r7UeE8"n~|G,G?>(ѭ|-[/>5 R(2NwlxH):&A3HPxm.HPPh$l8@vp(&/ ):WʅU ZLD0(Q,Bl [@TNY@*CF˔(HF&5*;MƊp3vF]-j E_0iעq4f~ < W__ޮ ߢnCT_VA{vZ"=&@ EN^ga5 4V)C!/N6&m22IWր@GfLڐͤGcF/K"632kdÑCA+~EkަlTK*BAUN¤%-睕MtkPy<\ 0ߌIƗ<:5HD)W+@(b*@r/nqs&;$>[H`ojXy}R|UcTdUfbIHj~MV 2BCVu npMq`M n݁b~QGƠ3.(Jˠ~]:w@_W?o4]|&]iO~Ӹtx¦K3+4s匔{97/ع['ݟV5oMĸ)/yct||;-Zzko=6~8Zcy$ú`mE-e;W|?-˶]OtTxmƣލoYϮG+^^<?iw=삊M2vNX[]*tV ky*: Nw:fFO:x򓵯SK I8 `0hM!alhjejbLWAsj*݂]Jw?=wB}<_A9>JmnB)4^$+D@JC]f4d I+]GA1/QRZ7%b1;eʤ:g"@I`|w?ړ) BGJI.転L Uby3;1vj:u;凣~/[-gpa:&vP^h=#=r-@lٻ6ndWI펄EU~*Msv˺T4$,;"JP8'ql f_7݁J"B8K{E% F$hb#PkLM!wq,~Hw]?\@q>C܏PY bʼA?*o$ײ߽ӴX}7UC} 7q /<8EiX`Yyf .r* ; Yzo* -b54|v VΡыxb}wmLYIq0zg==C "O$?!sG_|pb8OFdQ\`#KJ߂IqGLKR۷=nIbt 6`Ni]C8q2'J!Wx∰vq_{Z.zً|̒(xS *hcϝW{a4?}D1|.MR*AU?=~0. ݔ)'Xׯ,r,gu/s:u ̦|X rQ{˖ן޾QK/VfW}ie09 DN>_n)Jk3VEr5k3/6׵Jn:] uC2 `L"2컸TN\FqeT/Bre"2TΓ*n'ĩ| hrᝳ Ģs|T]~xo&ュep4`MƾXp38Ƭbcؘ /A댡Íia ;Ǎ4ᣱ&A  ڨ< |+=wq0 x߻(5z'x*Pvb Le8]3J>J^?9*1 v$!P([v۷(nQ:%q*W LJjŒd$z#lDOs1"[RRƹɩD:ik*rYFcBbkil"Vk<~ڧYL8/GșWsp/\~Rp/n0.]`+XlޤX98IML'R)॥ԗ1]P:o鼟yvEK(Ђ1|Ja))#tTy0g(3 S l.H}d)ͨl Fe>6:gK>Wxqor"[OmG1w1-5ڂ:";TYU^rJCH,SF.k{/9QCT=iORpA`rv6#lX4(m^je3xC-@rR!Db:Q" XJ{Zfi-/_I[&fS5ȧZxOWȕ3+O~6#$ )cf/0.u=ܺ^yjGmV[+_=e-g}Ə|ƗZ`{s`܎yTi1Wptp9o{ ^Q`ΚϚ[m!8wlnBgwmJ2|bGmTrcA+#Ʀ DhYJBWs>jnNjaYF2.J&wF[Bt\'h5QB& E4IH7.3z+P @`ZZ6L6T8 ZXyyhy ➺'zpLnWTtf] ZGsԾU.R %'L{P)¥OB'''+/WV$rӄWT N٢l$ER$"ѶHn[Y;7n@/wʳfϿ=.#!_qja}Bk.|d9MC@IR.{(oѕ IO~yr=]]u"ٚ_gR8{dv׏#KQ@F]s8<.}hB>(N g`=uU^j ,'(#o8!p6a~:? |v~q98W$Y™};{$:99_^/ȌJ( }ДlRBsU n$TӴu1iOE𯵻']/?8]gKFܲťګRw~1[novQV~=/^j>} BzZ\eS7K,q#F̳5MeobpK{eduA60VMP6y5"p]K?_}WşzK};\^ i[>{Ykd~bg8r^( L?@zN(l4!Sm0sD_P"+#Z &+~Tq 9cj $Owg/g~~F9;_ޡ.BdDVy. ܈<刺ߵmuͺFѵ&mz:+rC/.c-B b&p!rpDm6&XXc AJ%Ybg(1oW>{8gMc?$gQ$<P*Fb Uŏ]I< mju@A% Mx6fŌbKU:f9%bS_ Ku2t"YtUKd4=ҨZyL0Q$mN/hњx;q!H CH!H#Q A^r9f`=o i¢mpJsR6k\{dq-^4 V(|*(+'9,p)X:$)9gI d(@UuNYs[۹%aNn+F:S"Ԉ>Z/[l4A3ET!2c@P)M!pf)K(22P#~M)m-*$RP[>k5yힻf RҤ V +(AvwpnXTG\!G @[SA15cF0."' 0TPc"W4rc!BVkP+vLr=S4TSZROLT0AqBbBFb'8Z{uXO|'e`SdҤ28AR3J$N)A1- ffw=Җ,3?e" U&ܠA ^@$q:BD(hR=dM<>8乻ö;kt^b ߜEuy 9dlBz&J5A)Hm(<Qјk'ATZgL)%jWz%0P.& m[+gqx+))uBgs*A-hU%TH9/B+4crvʤQlZIvu%/6$᝜͗d^/\U{T9vhGHJvOnF.Sɵ|ZFal'ZH`{3NvZb\ 5j=7݊ø-kjGMDJι/ MJFrs Lrr+w3KODo='i{F&VfV7Zڞ|jn;^mA/ ϏZx(';*n.*.jQQ}'}mOcT >LWivf-u ݮۭArwׂ}n }Pv R9R-9ƿ [ /-\YEJ!h* L8.h`C;pJmڽj6`s 3`w0}h4$104R!X=0ayId|xԖ eV1'8ÁHT ƹDl&fvw~|CAixnu^Ny<;h>VKn\fs}Q׈U5<(1%YI05F Bz2hQԑN"Dȼok;3FhHzH$pZSM `-aA\WI?kn(WvbP՗T_ʜy؋/.e㉖+*9?+)dIa<ɹ{tL吤d=̥O*H@l'x<"vQt{< mPGJւLZ'R$m R&J;nMN"y2\w_'K7t\J*DNRRkIf1_dnon尃/Egr$[-Yv,Kw>$N7M٬Uz)D+N,n PYqܘ}PJ\_{YP0}ˬwE5K(ٝRòAd%y=L`:g8 ZUΕL89E"1z3c |ry?w )mAĬIG `ZBʔ(l˄h$=v;?WUtR u^@6/Z/^7f ǓG/[ s ~lm3nֹnb cvon>Ɠ6@u?_^a'Qz&ϱ4n ?rC~?ʓUxo xE:<;9tmr8sxe\'I0e1?C90>?7Z׷~9{Vu:ݮ5 q. 
|N zmO=z-V`dn 9 z$~\-*f:wMy^4?0S=1 :Pԩw0{f }mt3]luj]'&)^2<5kky{/6tFmvXN|iEQW#-n;uM+s!`j VnqӓyJjs+H>%09xC A -$jcEhD`ըb G~{^8*\?wh5cཆG9)XD eQ";@2^ف14J#]mo옫SWթYQ.=V8A'輧((OQ}l}^i|V!\ ajeU2vA@f0LW د[ wGZJ ڄQ L$!{5"1s_ɛMoK4|t:[%Ep+ VEoS,Y1gT|ĺeY2e'dB htRɤ1P#D̢9;/ҊԮ;vEmYڃxn>KH!y[KzHR#- a`| $6)ҐI2$6%šov'a=!eMHJ5_Nd5r6a/, 0]QUFD5  g)22F!_׆tWRIWeҋM/s N,g5*eR!e2J3D+n$S&-p:U?Cԁpq.2.\|)[N H扙ԉyǝS\ CF QHbp)p/xXmw슇2XB lCP<Ы=#7ZnNpW~tҪrw?!TpL XQe p& N rXu+RABRh>)4* n|P@3,ChKI1~ +F-|>V/-rM}HhJ$o3K:|B1x A9h< UĻ{W7&7&_.tYo Kޑb|@pf L!qtHQ6VrT5' N*(::*~ rkB 7UӍużLx{$uN;Q꘼6AI)8Jšd,%.Iii];(὾3 T?+Et\Cfyyߗݲv[DUm'+=Ϥjۋ,[=3҈8i\PY&Zy *"^Iʶjlv~5-1m>~vwkJjBi&$^ (',z!0^w#f=J Āc8 ƓVMW be6{DXWS2XU FH(36h2XT-U|* N襠/tY f^+AH} Xt!M[IؤWv(=hNB?O;jb`dU7DN_,G h)W86#01+lNF;U1ExO<\(׊`C#e 3ɈNs碠 q&cfci^{K=vhC:iX1 e`եy[PlFFG%Й慌v_8j~}4&J?WOn"<~s:=Ϻ~JGzݰ܏JX1a $X#Axgx?[kѫ%EgX%)=(p1o=ġ]ӻƼmW]?狼\;Ezw{Hp ը+ m4բh\{`oG b:W4臟͝H2ވ?F'/GG7t/I-.˓Q'9饒ZOC;Zuk){B0rq[UJ3R  _1\r>oP+{WH \=CR9$"8骐-P w*T%\*iW4 :~ݵt[im0'F?(;!)&H}1JWLџËd<7Mc;!EJW\k3at/{yk鸹evME ߗvHĊW%m%]q4YXr~ډSr1Nt|8xP';~ONϘ̏LFۯDT-ct%4[Q +Im^gl8co jRӛ[eDk*p$2 LDB|$2jTTn顤 RCiKbyʸ)ehw8rSr!PԠHuWt=ȵ`nF R *T5 ѳݍ`t[և#7r܌ b}peVӟOgot#yt'Ӳ-eKQl~zF{tp|}HB$"q6h!)(sƀ6 E)Y/+1f8/i=&KdپupPw^Dp!PB *ԚpJˆT B z6%eSi`/O߯d[DGlG7GIK9>Fs%w4(cPb1(*dJ⃮kgy=茦5"e<\.œ?+=?=#Wۭۋj<?xk &?FZMj׸)mAĬ1hF:@1ik-w!Ke2!=8~2}ۑ{g{+Q-WJjouJjy&~ÿx94f騋(/\ݿۖ}1w8څ/խXrg@97 Cxy(08e(0$6 CxP<`( rPؒ=Y;1Vyh,() o3a':ҐHmj[2^gV?[X0AB,FIQq#DIj>1NdǺ¸QYBL4*f>me.#K) -E}f^qEe@5r6 V"A)CQj~Xl޵5q뿂KT-5#;;y]p ::uíiX 5M7z/.7.ˊnܤZXY#KKt.>=3jМ@2D'RȣRFe:%DG*4OAp}ة?ū֟PBif:\R.n˃Xky( [gbfx6rͶoo/An[qkdrB;rQxC~v- ֭w`V1C$G[ 'm]_g:43?n= z%/%xT -cv^&\*%KVX$,%KbɒX$,%KbɒX$,%ey_RROVӡj>6 PɿDZ +%nx<%nɃ[<%nɃ[yq(PвPΘvD@AG⵪(웕eV$wảّ!&'muuթ`F}7tCOj<^qQ0pĕTrcA+#C}B>p!)YR3eI*b\;r:/|gi}7{QhD*m i3&(f%DhpdA⹈& Ia*)mA'm( !ڛFg P@q%@ :`8@+  "L$\{g azae{JU.R %'L3; (wV'zTLS1fAUf Q9i+F?\T#6a ")ghj[[Aj/s9u)IR n 7s3!l7Al\-Lhebä7\Bk@M&M}`w;0r}}0?\fOjSnOwgq">VWqF 9E* ;"gZqNy8s1\׳ l5}6(NJz!hMX\77~量ӄ*wW󷑋ɝ߿ޕtpd6Do,ӏ*Q[r= ;W_5e) ؼk j[}5ųuhX?x{3~ni6b,.ݗ0 r%G.7ջ 7 B޴7 MmaCˋ= bQ8yN>zxg?9;:֖j:1WՒo*!kƑٟ톱g_awFחaZEKV9 $n.꿌d7cԪ퍚;+UnTJ?lBO}/?~|?ŽO8/re[DM^ WϻZ^ë5彲./+EUeUA 0 ]貍F&w%q%xdsc`$HiE=EϷ1`0ފoeQ8bߝqa*y=t {t9\Psp3pԄG H*/J I9.GuoRRƹt[$)N$J"^@=ƘgZiS5giU =6.mx5ƕyRXb#qbY  G>[JD1aȧ^Z/ӰZ:%FUW7Jv Y'^}أQͤ?hj }/̀МW ji4N=OQ`~4SwLwN>'Ԇkޟ?ѳzeoߪZuN Ęh{u0s,>XKs_<|cܣ;޽mㆃ%Ksl.N <**,Gw[r=|͑%\( $VʓÜM'Π=$OR33 5tuKCl-qm ШtA^7^p; &1j]!(_sk#(Ok8jJ=Ji!(}knf_7U|Ё545̕ !J;ANJZXçj 'ffΡ=iORpA`-#fM(!*ۭ5坘!j %ՉioҞΚF7?,<:а_]d9W_֛h<$۷P|KoO:eU]g٧얪U&u~)-NȍMgu_]lm:a3I~+,]?қT:/7f]!d5]WixԼ(򩱲CK%7L}e#Vq7_R4"#'p]972WeeN?N;a/JLAFA9ܺT_O\$Hk9qf".Rdau6Z:(\29X[B,˙wg,ҜVt5gu ςvK?AelOz\_QLb+Om ISrΒ4 RVyQj"I,BIp׊QB4%qG M3j CdƀR@u4;f)K(.dFd&g2Z[U,I#I@<&nyrYsvyhD6,T||*$XHh %M:`\N KC řwmNEsYSE>'B[k2BA0."' 0TPc"W4rc!5QVgP+LINIOr=S4TSZROnJ#w# 9Eqt8NI5L!2 41I)&e$)qYg9IRbZ4@/֝Gۖ9Xa)OE:["&|;G.DB/d3Im"7hй( I\.(~!Q"Je\^VNA::[!۱tؗ5A{"gN[p2 H(&G 6a-Q#\꜀˹./GÆE$Ǜ40/n2?/2L2-Dg-k*7AFs3a;7&l6ք]q6'WRY T#6p`IRUGݖύecӌ!:ܭF msxu=od9lyNjKu\OdVW+gAgo϶lbtwʍozS+A%SPIy>那k'ATYgL%KJHa$yE}(F(F)F(F 0bo%:%N(l0MdKBS!$Rtx4#7g ]] 6ܥ|90)rؖkHЃϗH|[7>8@Hf٠8I%ZeHD*ι 1RLB?R L]^v!Ϳtt߯ۦٚf+nV`T}0~bn7ۂR=!Al<@"5O`[=uJA $Z:7 [‚S`VQJ*hz`REuC-BQ7=LƕócEƠ#.cFC"Q9SSNI#kN)q(I&Q)ML rT*g8P 8Hx]p#ޔNľ!4A߷P"N 4f6S^[m7ڢO%Y۹DݕտC6zQcΒ(EL4ZrX9 *(硊gt=EyT0nP'QpZR k))MB`P#؇KXF:ܦP>]]9dFMPr R'`.r,KQ+X]pO 냷CV}~҅jE$ֲp.vtuC'-2.eb>7Mns~6s4k$^4 qRbjj_q2*gčt%hI:x&4dy 0Wa2= ^Ԧ)tiؘ#h=9΋_ _.C6z&ؒq.Qpy\Zm&?l6Rl5\Rl y狂Vhxl+}f}R7䗑/6s"QWofp5}5>u%D*|`]TɱR$CY߬p4761}f *2$oϊ?3d/,(T[z|!]F @5RL.M b=[d/^xScp腿iס9L9w }9PQ~nIF"ՠ/M#Qo(9z,ߡ. 
s7+A_Ulot +O6[oR5kÒ߹2WY@RlrB9 EYQЁ^ $(mxoݫBRpB4/k"<+&d'#WA=/.]ʑn>Tks{noTnUSvJ}寧̾w rg7L=Py ͊-fGZxX)sݤU=TU~s6LP-ρAߎy+IdMK2j'ܶA]x0t_&$pH:ܫ[JxyY>F=lN#8 (4b4:H}=$y_T;ŐaEGy' @ώ:aq*FƎx#%t:" \m.b(<D'{a|i ]&2]ŕ RRhm,exJQK: i !! 7rGQX90#Aӱ< 0uE";]%"X"DΨcWy(;;Viރr)ȭgq 0"4я :tI` 9{ؑcTBV (0/:^Pwĥ0H? s#e6Hh&+ I/=9vڽpcf]mv jn;}F $EhMjzh_=RRԙR^ٺ<R/4 [vyѸ\/5䥖a2mWݞ4*0{V&\Ki fiɰF릛M2aD{0=^&;Oc @㣡M. ONωe2^FHjY%/ժ> n ߇?CJzV5[x2֩Sl/Eg@7aԙQk»޳7aI\? qz}BM |j;~i[ɠeVWfnV KEj;%.,]^=}nڗn9nPW}7sœ e6 iuDɲD6:xcl4 !<}(}2FJg.aWpVRI3YcbG91=pKOn'^]FF, WrH# be}0"ʌ0+CʘZVy0B%xXS á|,T-wʏ6? w YzZ:=h=i ӻrU|Y)zPcZȞ&Ӥٯ`R;\0)2yݮgkx <<;*  )=wJ7I'RI` IQ&xM"[4:rƥ&LI8FҎY | \KдG~\gtB}=ޟ0l05EկbC]zYNҰ,w k| I5I!ET.P&y#5V䫙^GRU-<* tD`CYu)TD2'Tfi%1#Ay C1nGͅYS3kGI9S-H:4qI@iXgABAh8AGݑ?ԆQV!;^ӘX2", bHuv)GFꖑv@@8o4gP\sK q\"R aVn`Ia@_l%f _%t^˥~ϯSt-zbŘL)l; FHDNbNtd:!xuWUÀP&|\5ztP| 黋|2|Kf,|>;?Lr3bnkC-& j`f_eItvv:.?a F9@ĵj] Mh 6$y {eJ?7YNqY-Ջ]r sWf8Ad54՞_US7L,}Lvpp[Otg?08MG^A60V@6yY#ﻹpH]sJϋ?};.~TyQ:Z=k$z_)`8UV@4`řc5Mg܌KTE:;'+o*R/Ǘ$w_}uۋŷ]`.^}ݛoa/,-4*{]׫w-XW]CuT]K娫 o/r oJQY(!vS¯iI FM<,H?"HeUgKTocQnh;Q<| v/ƾПAF=GJQ`Y?1GwE\PX &8bD8Jy$, & >"J[$XJ%Psd1RE8yM@FBú+{VŲ8:TPEA |:ӂz\Wr1{/ܿ#?>5p6cւMR$]g!g'ȉAMtΦ{v=Vd~cϠsAQkM6h@k X%O9yKQB (+++ռt4?8̿">M^Xݴ{(1&&J>mB\"d jp\tQdO$SRZ8JvVIg`BA˃O0+CpN"$d0YR" 4]PËѴ8asqkNWiɞ|:ߏ{;"oO4%ZFz+MLRhL\quV&-?O๼Z zFgB yPh|^e!XBe!;e% j:{b/ Tw=y{=WFRl `&{LIͲ;Ib% n'MZYQ t{cZG ǡ:C*udww !K-@aCL{_6aI'_y+V9x'mJf UޠJO4A{7!7"GiI%J"e>\q u^"Bވ= o؇{j< ScĬ]R3boas]Oh?'60.s?U:?n!!>w 稸۽ܣ{\:{ܧ{{} iuy0_]~اD{ttEeNi(`Y0;X_w-4K:rN>zv ]˓O⤲YajyBsm ߞI{?)_@q9H_=781үY~ޫ1vJgc;ҹTuklDnʣQrӇ=/Fhd3yҏϓݟ/p=_]/Ougx6REaV+qj笭gqL%p_kQ#ڪ0nr9(}R{ܼ %c͔mIz:^ n}n%N]ѶRQYȬ5DAk⧚zXr5c@vt<ؤ"F?Xod fO*|ϞcdP7JO;U/V}RtUAq Ь&) O"|W:l$)'+ gx:7[n n&M@?q8*cM}| Cǚomjc$/cDnHk~ nhqtنEOQp#1h_ēP#[LN]Z~!?.Цf\LGp|X1+hXA.辳wnXnӍ|48s/ _Xb|;VpGg&gCk5q˾":_h; wպvu|k} bAzZp?t}xrYgȓ'u)awid'!a "D1TL(_l)q`B0-7\Y=e;4Kq$GlE,=M.$D@CYJY\R֡ ^'IBz&W50Jr4ӍKrlnQZQjAD6# S82f˻ily,w#m3LIc(,5FiCM59$tҔж'"kt;jbN!F/&!9P%;Wբ'Ɠ}L˰(6jY~A/TYN+V:$KA1 ̹d-|݇a (>)Ng#LA&HG6(細.56f6ufrwuy]sc"\}acY(l'o}/}HquF{:Ʌurh/˴/A;Gl5"'u:aDc mȹBقGIb2rh~ 5 YQ*YaSY#Ap±8_=# HZSh%nĎb j7ӎCQ[5Fm5>̀BBY)C¥ [>ǫ(ّaF8qJZ-(a1C6i3d3!cM&7I%ˎ#cb.ja3qvao*0 "6ӏCQ7FD=!℈OmZ% 9bTr_eu0ȱ+G 2&,Ve)Fj{'FJ!ʨe8+'&KV"6JFPƈL݈UG1u6ӒCq6E3ℋO 7:*yRH&{{-)  IŎpAJ> _iǡxRgm9rjy4>|$)uе^ُllMpWRuNiɯ:bq7@aR'8dr]kPGT6Jk*HAN,d-9fSZC1x!9㹐%G&:FLYqX lI:;l"+T9yb\fVZ͏Ia^;cdF!F$i@QRF4")Tr:T6Y<G8*nf:qGƔ{ /H EV{okƢ ď8cyR[aٮdǓHE)B ,_a4Xaq@IQ׳j` dTRZE d;4Ph|YqzY'8ܤĿLat4w$j&!aJ6yd}dK)rθ3lUd( .l[Zx6]hg_N[읚xP!=+w֞WʑT6`JAm]0k{,%Y4R$Ӗ˚D A9'D DEEB-Sg [Sq6{fbq\lMWJݮ}@:zzK% >Rt|7P|LCLQU'h8yGtظ7]EV^;tq?qA\-X|`._}sy_w\nuGՕXh~-_~7ݽr|ڱW~7]YCq{$Ƚݐxw/>0w~u҇G@>_5>z~_[#8ܚhć')0|im n)c*4tZuK, ̩Q;j4Z kD'`TSA, co>60GUeos$Bbd4\Y4X/N>ziȗ-`GTc&i_R"WBʠD "eG{NHGl*O&Bm  }^,d";+mٙ@Y{كz~$/&ȡo"d/*`A{vJK lQRWf3X!<DŵG1)SP"ceߚVcpE YG ы=քFٳCdCAI~E)eTYP ! PTbd`Ƀoi(m tClm46`ʽ/)etkfW+I%%PL7; 踤<خz_icc-!y-B"[-xt^%Fx$0J2uib{ƍ +/Tȸ4nCnR9־lJcIJ?ËHI#Q҈"%eԌ0 kl$B漯ϾYp܆gPu>T2B=فKysO'߁30e "1O UBJwD15XϜk l5wV'N}(Ma9AK\eQɈ"5)C&hd[ 8! 
">5el> 02hFg3 ))Й8w+ԉ^BfRA%?]օ_)y|T v;Q'fR*T24&PރAQ+%.NUxi [o2U݈VOfii,f-؉ ďCG-I 2@-[,8BLS \{}b|\m7]:ZМ֤ dNj UQDgYF)ڟa#SRXkȱvR{}ƽhT=iw&s!>Q-bdD.mmW\PZu gAK }Y%| i>0}T(Y8 Mƃ|~/ _(qy"% ʰyVN˴{7q.ol\B3{m٨% #NZ p'#&/gq9sr$shHN>i=@1'J<ՖV !*=NvwR]Y(<$uYS4'+s1i{.j=$iX&7Y,cZ\L28l•errLE~3q^7',㭮z~w֠٦m+["ڽe٨bT'lMn#D]t`B'fZĂND28UϸĄogc{,(wY t蝠*iUBp*vIK56:Ģrm͎[Ķ&(!LB>4<2:#8ѣ8e%)*;I\/ \{Cx s 8 lzۙVwNbz+}4]0vALūτW5(L ckW ckV}gl-RS[_!c2TRz@"5J`WVu Օ6+s@ *""eU]Bue~Z^=Yh4}?obt1$kX&-8)xhQ+/ZN cQP1& x IR!K[c>hKMYHRQQ^Δh\VLqj ֤Lng{ 9QJc4BX!يG aM1s%ppdSMs9jgIL%ϳ:1êaN!mN-+Lބ]9s<5M/q]{w'n< K]}%4yÎqV\+?sg̕2bV~\WJgܱ3W~\+?sg~p(~yWC9-EJ+ykvD@kEUXA ΀D@dhNcDήoӆnU1KI!Tt F~~3R~-yhb񈜤Yo!H9Mjy.|,70~x[|0rY^i|O_lxF(g1Y_NG̠1PS3l1=:J&‘Nә|=P %kFgV3(M%j)n(#WƯ8 p8'eaCΘs5:\LzԽƮQ$Q‘qi^޾/~DNFm%%(lu>rM]S[um>^q[H_B9ѧݽ͈w?W~i.t>;]:8AmrZ/aONc+خ9p ?5i Np54(m{Z> O:k{ o}eZ,p!45: GbG@O/?n.;Ԗjٮ ~X)h)ğϿ4?swqIik~Tn{q+=hr6gO{aDTPFi9gI˕ȄoI(>VͽÎXo4?]Q5oiTL׼p~S\$O_zqs?2Kd+T*?%|\{th|ִ^/|.ܿ]y.5bB b6rגFw38b6ƭ6&_{DUVc(Ѣv'K0hq j*\9:jѠTST[xSr loM)z`༷ 1iVg&rd+JdҞegl[>/8|m_[#}C7)w{:.ZKQ΍[4Aja:3R3Ii6{GӋ]xGYӦ_+uuH׶VѦ *&cWMV?B. wᄑyFWf IN7{^k_DWnڝχBQ}fΏy;Jާ+[VSs&8y.O~9T|ڡ)fUlsm棋j^"_0d `V _(EƢcLC8K6\M`l&+ hƒdy{d3VXֆLG)<8L) ұC6D۷{y~S{I?LI{mnV7Ρ aew,456z2M T ^flF Ɲ@ -4dBL#Z [CLWq\#i !]<;`.t}ۙ8w/ZO,P XeuaJ4ߴ㵛R-& ZӃ^ ʦX-v}&2ϟ?xkKZOJ.+o#o׼zF%7cwbS쨄FĠeI΄!Vڤ3-FӍݻG!#Ӌ+]<eiKuf`(73Y ҔBFZmPh OʕIB$R˹ ,c c8CBו;ns= UIU_9D>7`J!}Ux ʦx/S?{ƍʔ^N|c~Q[q&'^KDžKCD*$˦OcfHQ/cj(Qa8 ?tHmSH"sY@1"xH \[YJ\GA࿔ۙ6hzv%DS"\}^h /OUTKX<9TJh1g ԈIf֖̀pKbF)/QZ:NK=;8r]lFHAPi{4EBF)i҉+WrppnXTGZPqJ*Щ ||,u g'B[k`_DO@ "SAhDP^52Dojufj!?_r=S4TSZRO.J#w# 9=ptߙL<N&L%icHRLHRm/g T&ipH :phӒi6,^ۺl.9|'zr(יP]*_nDnСsQ/@h9"u<7!#f }lQFSxR_媳tEv]OuƔBYj\.D_ +rc}ȍ 0bo%:%N(lWiZ,B yIԴ[҆{v.L gw~RWyu)\d8~:Otuq쭣{8 Rh!H´-"V m"%ܗY6RLeSR L[F#oJAZ}VY6]dkɌ XŶFj|+/X Qm_EGEu%&;W?GooU+4wr:8䤺WL0<l+C_  W S1:⍯ ^xi~[i.RN򀿬Þe<-&E:ֈQsUeo޼z!9UFb)BtFj8#wU ?;pLr+މfJYt1;&(=kGپ}}]|Hcvcq!75rB3F*88 1X#RQB=X;V-JWDv5o)m]:6jy$fjݑEm$"cJ#$J`Fk"42 HQF"RGzD "dY>=0|K$F"蠀`<ךjjHk B"49ͱCE ;a7{X4yJ?NВ[\VVa؞b}r*x㉖gHb@?E1,~NI&1!I%0>9 Ϯ3OzBO< y4۠,ALZ'~)뀪iGQβܟ.t89RR'p X>&ↁ"jKIK煐wW bJ_)ubζ?01S_eu dGR )f{odKnU ϭT,* ^y99O#>Y@֚ƿwKg(DxU4PIU1q^)c)u+}7i˛hm:)F\ Z|e6Wy&"V8$KaXGTɅцlӆxy_Ԅiф__cC7d4d5Tq_^kMt4RKOGUY$jh r*҅V wdHyq: BNG Ň0PU*?c`cq1|IqQPצ~=FV#m<ihFgei>ĺ5oWkܪ^^)ZջN{UkWTNcTx>uUm\ZSEvf.qzcCH<#CFW̮]Ngbq#t1mfMfAn:w'7>3JOST"BrX׻&U RlWv7Y*;JG:_Fh0ֱWܳWMG羽3!A<)o:$6UwDB)KC1K]i_kUKݭv:r]w cj]DO7 {BcVlQX^u\;Nyt&Ƙ[MspYf\k)]WTeb;mj̐5L*l`UWN ]lHnCpx&XtA( \tU0FR0\x¸ JS$Z7 "3 ɤ$n-)uh0%*3IV) vRd=II8 QQR 52v֝؝v&;a!JK,c>#i1lVMLBcJwv6t6L9b(a6P!A{-3+HBE42&A$A JQ}:t&Cv\d{8˰ɕƦ=Np };blb‚(HЦc;ۏq<L:vEm1j{ v30\$rHRjB 5UhȠ=CAEA-p:$v$CFՐBk"1`&\ňQ1ITFtAv֝xX9( Ǯ(:FD#bgPtb*efI}WRW%1tR<2" y>5(=SЅQ 2L#@*%zLh4{':Fκ?j8ŏ^ggR+.qQƠlg@Z)<+6OHT[jR&@0pl=.;]Pw; mU_ȭ=aUc >$Fjwy0Po+R̡ᝨB#¹򓛼L8g8ܮv_sn]Jﯯ^ܮ췷K% LMvAt1<*pZw"N£M ~Qe~/ zp$Uxl0Ճ<3>|H3- ޜoQm5QҮnO9{ ^ կRH&)|r Al\^_(G-z^3]{ŤZ'O^r N0-Sdg2mV:o!JC .x5p ݄/&qƍZ@ ]T {ҢSsѡ=3 h~3t_>Jj1JjbE≖;SZD 18x wo/!\Z *8}]eeaoƞ4bjR;(eKkYZaIiuPk^.t#7Ort,O.5g5>m u!5m`3*Ǿ~o6ek Ey OU4VEűL rT T$&bK/-8y \xi -H/D B4~P Ҧ2f,Qt뺭ekB-U0a'g!ċ>ծ*N3?Av G6 A[1x"NԳUfQ!pV%+_$:I(wp,hTM<+^֛g7~141 2z^}.֍xZ{gBޮ⤸hx>RO S4~|g|㤈cEm/]Y} k =^G1J(mJ'&.P'o3ԏv}R|~t~u9jfW@v]Z`whU`!kXWo v]06-UOrliLCqOʓUJW1,:%N(lcTiz,B yIdٻCz1"|4X =h୍oɨh:N7N% I%\D>WַuydΛD˸e9V@ ;;z&bĎőȯ$C q sD.GHD*ι }|5K10!3zgEr0=AZ}r^*LR^#.W,1:V՛+q\jTcA>U,)Z`p۸5 w7<ٛ_&&OCqŒ+KrJO3ϒ37EyI8ھ&'}D*!h`@聙Jq]cTz|︕ !RŠz5(D;ihH$*'4z_SNI#kN)`LI e_!h^(K޽ԏw:6<,vg| 1@2uޘM_ ZD7IDǔ FXgI"&Dh- 9HQF"RGΈpDȲ|jյg`#@kI'DAx5? 
tJ+B")*JQ`Z$BYhoۄaڨ5g3%CT{*.%' 00N #~Z*ge-8GO>\^VcMU;|xjyM'Z&U"dsO2w I*I\䬂Nxlq =gymȣ`ey4h-b:9KhDPqGs@@մvF"ȓuY/7R8ό/?q)IZ8IHmU\ۋx-e" K煐g/w맔BnODkeZtl_㐝}zST:Ćy1BR  6N&0xْDs(<:ऄ i<7)x' SZs!4z&b!I M X!Jc >qs!;JK a4[Ӟz mӕDBZze_'+\FWUd&n"Ep߸c>cXk/sv_11/|Y}2Swzt 9{ݧ{u}ra1}Ԥӛ:$j? rJ׺Z}m#i6\W"Lau9]s7.aPw.1|>Mp-a6&7ߙSykMxl'F̻lC6—]}kޯRjԯ극jo]խKc;:gߜiiWZ=ˮԚ*t8\w෺;U$C8xpyUOuu/mf;YջoͿ((s? "$}oMmOUe7[!Nʅsg3h峞A+5qn:o}-ٻW4mڧv޺!E&CNb}Vb\mAe`LEӄXigBP@O<~R䏺ިHW.H, J};hQP̳25˕ǭk/@,') η耂O[i^iu԰z}RDݥ,:T^R*G8@?o?tnɢ ڔ;6ݣ)_hK!A|;v7ꚜ6f56ov׶8dv 4OZ}Eh]=R9hND{m}ƅɧP/+ױiS"\;fՑО;fԞ V4v&=}os'] lu <0e%ZgOg&S޶ YFL̼"ݗ-O`.-;eOg$JH=poWniUBw9a?aEtpDiq,ODa|I,! F%'"?'__gl[ k!2KsRxi"%^EX210@Ae : Ɓtв0|tA9(e^g>y|R9|0N`Լ/KoX9~]X+^K?qq7̦vvCpƆT@rb4"0"1op).oG1TGg0O+Wڜѫ:ܜm\imkɾ]2HNӝN1T NS'g6ht*J7VI`^i[HAH[]rm`&C #mz_ RW9 I(*I 0'2d1vl)9BD݌.@,{c8opܳr^?I7\ "oS~77|r]MOzP[A>m#G?pW%ӎVo|p괎LVք˦Kmܪ'hgBCx-wxd *%Ϳկ<孫ERM^] 6,7d%&bv1Cqfu_#Z, -+be@FX?`2[Mdž8hF?z5D5jڣ47 q>ӵ Ļ(RP.XC j 9>]@Qn7O̴Č>טqZ",6Nv7pjAp'S9H׆jke~2#}(dFR-fDRJr3fWv ڷ4k'-˚f i%!֊?\w>Yq ͅINqǷ$~Q{_MuzDT>w8~ǒ3a),Q)bpûq۸gF5 t5+SңA-r۷D7}V'\s)U >Q"JF&Shfe9 lP\'a2X4EO8%3xC-@rR!Db:Q"u;˝^iTbLO)kriW\ZK#!rPJ(3y9sS$mp-܂hQ.[@ H6Lϒ"8hQI4FkML(} wWԆ\JG7R͗Sɫ-&'/=vL]ѼJ]2kڲ]- {s~KxK/K`$F`qAInEtfp#. pkIL(2ÔP'2&k}z zd4) !*T J#cGrJ1,,&a!:J$;c1kWUۋwf4Jn_a1w/~Ӡrx4_rQ/lB*#Z*gWN:hdLH%"63/ uE.C\d{8˰ɕơ=Ap c;blb‚(HЦ0b~I\ 1OIǡ 6?ك}a0\$rHRjB 5UȠ?CE-pϊaJm!gjH!pЈ5P0:Q cTv0~B6xJ&=jʝ~:;&ڦTɣ;eL -.hvRiIw1"cQv]|uqќjaf/j4+Rޑ`R 3 ;]f8^mӓ vcv$xx ryτ|Ε00ouLYc+*,BxETA!'85Qf- 4F:my+exWnfSK"gMWE (0Ǭ91)#{4fD+:[S=fˆT b(MAqU,ƍk|@RmvJۿT6 qyå O=cZ!&+Xp4F B0tW![\o\Hy0> l ґygEI-}݇w1M|7 >wDWA.Kt(;\TǐM:;%A¯uljKvc}.>Iб<_zN4RokWxVـec*vPIcJoʾ6 8R~ > .?I9m) (HIs8|nh  c 0KL=:PW쑜w};B[@9cD^!|0% , [AJé3v KB0@?G"`oJM1U$QH8Hl*l-ָ|4~?&>]prƽۊ/]L6mvZBK];8znWe!=T0G\%2Y؆VfIL+bVbndf)tY8/"OwLIm *2kUJ=5!J(WFtB.C=:CEӣP!FHAV3g NRiO1L\tZiojDpb:ۦ0;eʽ3ø ÌМ'jTLh 泜xc(/ &M9;4_2j'4eS^ALW*,{Dh)g?鑕"DŽi!sM,ApjQL'J@< }D u"eq]EnYNjHĤ38((J)'HKL$D!i I U\f2}8L Zx^6嘚8|*H )r%5!lʁrVd( :p,e^E=;B>I#GDJNcRɜRaP* M%SF BX_&%XͬlDS1x#NSv h>2 'O/~ᩘQVҐ"et, b0Ә~`[` ؄gQd)Mꎑe] ۗa&Y?dC;?snqI;a\rMKS hsɦ,"a !qI5V60k(1dd\YL^}ir=)lOBT+zS_!féNV`Y&4Om>Yq/P9L~ȖT?O^U bbmqm>߻*ז EL+3/n? ^3FMHi8o> LN?[buofr4O&4j\%٬! )rӛ fEE u/ؾ4d6W2p|]fM2Wu@grN$ݳ4i8vi71KvS|aytNܩGYoP>\otL~?w/޿¿{% NHʮ$c/>kuC 0lhCK欫f\Rb"D 4مLqꨯՑ=bbaFI&3y'P\}McyR#(wpTX?6@:; T#qR)Ĉpd4Zye&H(Ϭ'SmI¼`?Br4绺|{]k#8bP|R&h#%)(xR31XU8hj7cPڞةBc wH*<2JR7+jDם(e*p^)gym\Jw9gfh~e~]6N˦P}Xf0K;Rq"S.ZcxW>dl7q>0Yڈ70EN93X^}=}!ECPw g>Šʣe0 #mJ9 Ej51[cB)7j<[֑aYA[R.k%!RHDcuZ#gC) d!jډؙ ݳQꑳ>Bp3X 4'Ԉ}`RfJtsyCH3h1KѺ:/5R}JYNM Q-@EЂ,rb$ lΗ$1=*4I [Jv|I){^dY|"I i_lFlb?ߦal(O_C֨tpgo25SNf\O/&jo"Y1EfXg)r81 .|FSgjUo&L_ݛׯLrJ͙"<՞4;˜20lovJyM=_iO~u)/-(BhVX *LyT1+rՅcU#ѩݝZ2M8D1|WLGo$f]%^`B<UX̲ S띱Vc&ye4z\ iѲ9[F.[ 3 ػA/-6Nfo-:CiC>:f7t]w=M>mH%mMZ&)rLR7wz&'@q1iwm:Sl{XdC62,:9̫ݺ}7w^9\msJ47{>v1mk͇7UQ B.R\FBQcOryV;J H%y0>]]1E4!&`$J7xbiqJw(aiLmZ ZA5)Oɳf]l o(vzK>*b\>P^  6PD.%7Q+&*By_`(PJ(ڂ>q#R3$*KW|3%ҘM+>w,.Mm*w]0+$6lܛ & y]4Z߽zJ2ZB ELnld: {SZxk!ƹ{oq8P#F'[(8vJp`঻%^ժ|zMUjJd.r68{SzUcũC^myDk_ YP~aOs@mV6+JKm^f+Ŏ?,uYT*ݮlu0pzqal:[*< epP9+SZ}sF?xP;+N2$iy7\f;(# YPJRXm8%|Lf96S&a5v_y+52Eٻ6$W &tV}d<, f%%g;H][DsϬm"OwW:_TF\U)!C!Lm޴A5Sk[gk2${p'ZV&:?%)(߉]mJ͛gTbi ڊ> RZY-PRԃKԵ]B"s돈 4Oa׈fhA'-Ռ Ӯ:߶]rkמ၈fٽkнminmS1|0Sm >EZYg,4D2R 0 ǥv 4fƈa4Bv͞QG3 < 4BI%D~/ߞdCUJ6^u*L%T}oFBާVgUA27-G 9ՑJj MQ Ar#i>Ku}2:)&MIkNֹ[cOTs̘HHI?̷2&dB)Nqȝ`1vҼXmlk!G]чΨ|W!'c#ZZal%EtALڳZaB BQ:K+z o'0Lwڨ/ &THƨFESu (b@?5RԬjځr'픫@L`6(: xiJS'Xm#d*zSJ4/9fH57]Ґ?ȁ;1mk Da;9GwŜI' sG!!.A BJ UQ r 6lC@WJokLEwR )tiW xq&Y;!JPA/u: q `,+_CPq+uj{RQRA}kbKRt eFjuD$J(k! 
e 1P =(N-C@U+cG=D &o23 M/ĥj*p BqŘAQեTei5 !pBK#;R7R ]G *}tl*MH}1[Zm\U&1:#'`)rE ]; ) >@(E&rZsȼb#XÙ.mhZ%f0A 9ɐQgk#ԭHw 3xh}`QՅI,TG7>Xż#՝ mG5juzui`M0~Hv?<ݮ6}dEUKqKе2"1wP:h{ 9(P"QPG݅ZR28Œb 5ܘ jcY mM!gTBؚbut K)+ 6E[L4f@2zv huj 3`EHvFFp;XdЙ$ fZvZU ה!P?Amj DQ񨈜'Pf=`Qy8 衲BXkYqj(crf]%PF;hV =i 5oVfzJPk M譩+ UDc+XH&-w;$aS6i` PG÷h5g L!j6v` n9@׋eM\=89?725 P.n45BgekOaC5vPYBHu:5WZsLڌQg5rFCo bLPO3zrhP}IeF٤#väDy ؓZN!F ":I'%\͝0mE;XLMY@z|qPCz{il+'~z+AڵWWSk:Hr;R1TDyð*aquGeQEҺb$U7ĠY/xp`\SclciaМ6P76xk+fnQnR'kҬUVm(5{yAgg &S@PXG%gӮAцvWFRM=!@քJVAP4kMyQm(-(4(s!z~Z)AH8IOZ{NlOR- Z.!b!c-ٱP|hśPY6o.Xv꤀k5c4&TԹ]Es-F(GgAApՋ7B6o 1Ne,cCK`Dx{Fׯ7~޲/պbyKeUDu nWܸW{٪?o? 7.M*9X{3' a.N F Uqi=8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@r 12''4'SZ8(}'!:G $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@:\'l99ݫc7*XYě/ߝԋ%'^}3x ˢWPfdu~޽t˜t_f yHG*^sƳ廿O #onқg9؍Iݯ#_.?!%m v8/Ⓠ}}tO]M^fiW&ӳk$wN+mVQtueP]}c~towJOZ|%N:>{srATr!{577(Bw +/~jݚ96٫엯^Tj]M{4{,%f$o`Jnf.RѺ3JD}1R>r+=9t_v9tϝn(/;Еzk]1fCW 7:7BWHWXiFtŀg{OWr9á+k2͈H]14bhiAFiҕ㝄s+v*̆nTs+%Q'tutԡ(㕻mК_~zK"fn[[qY"=V!Ww6)gN6;*Z#70?dMfF, ٰ445iYQ:)OttEj5싋|U 򶕊)x#O_o^XdYd^v:;<zyyվ$>GOqwoOs6|v=a/}ƯU [&Jߜ~gyjq۷^령ڍ~_<.^oR k)aYB϶e 5F'>YA~ ?f v=uA?l s?]~R;@d9=e~>*M/z]&t*v4'`2z6tpl_ZcU*[ĶǯDSpm-iTn[[jN a l;*@/%dʗ##޽R~=N^z~7w6Pt儮:mҌ 7;]npq.tIӾӕVy-tuteW)̈X0bBWwCҕɒ]K)l~OW2+tio).Mn (7^: |)^BiҘV[| }|/ c~q~گS_Zce87:zNi/ В;4Oc7?3LWO"eڍ*- /q5ވ3[~5#oFQ i,]N~rKRʣU\x ~N;ߌS3\=1h!anFt֦~.th?JqȫUFF;6!Άn;&;]1J'b$eiFtZg õf.th_ʄ *T]1D ͅ{$#t=rJ/>vza auBKj+>EWؚ[ f\m}9*{WӟZ77BSE~ene) oṠMkv2.P0vY4a#0Z{0ܖ#)ym-N"X>ܙ,w#,| *6ajV>ίF~֥ J c4N ߻7t |܀c󉗤s{_C#|U_U7yljMCkWYV%?+*KQ͂Vm~cbT*^,R.(hezu&k3 PWo ^]0-jp= 3HߌZ|T5lzhw5-aal=m 4e*O5RmZ]ѕB79(nF6H> |q3Q7OREg@njmSw>:٤81gL{£7)٨hvxی>sϛz2"A6zYeMY} Ra<'_Tƥ]C?̒AiF9t?ȈR4cK2YO#[ z6֒$Z2M8PxQLGo$f[6,C0w$ )[X1c2b=.mhnDt̆;g;. [56!0Oo")s3xr}tbks]~CK__q[ey&>w[{ݳOfejM:E+S%鲉Ims)O81ோOZA RO*9bKunn2CHAlgy|x{sC΍a:oo=-ަ󀶷+y3ߒ|wyFt)M5_$ݜX-ɖbdr-:_qVky)G7)+J{+w\:c!iT{bpL_01eDQvjXHcdh CL`Hn؈ⴟQ u 3qqFWEZ:Gu_1'h`JV;n$Rj_l xѦ3ӆ0eR2xˉ>w\5VF,H&gL i. OSiSs㞃G7K]O_l{Veb[-2B P8ƜR qR`T::BCQH2k`;)6G:^M LB1jNoQRk kYؼ3qv$ص 9=`x#$ \kf8moYm m}ҫY~K,hY‡I!ET.P&y#5V`:p,e^E=bsIà @F#R"%H"9̰40x$,"0 (O ΎBcP!*6]Q'48x#N~}jLF ::pOv1C;Rv֓]uw_A(ƶ?#.3%ʆ?g#_Z$ͥ`c\Yo|S1L$y )%JPL =O1[OC.7e<=]+ECoLW6H3 "Dl.Swf"\Y1A,t6(85ia6zqPB |V)>ԭ'1)&H-y~Ht~ޟ]NGe:] &NE&46 _ ӏwP&?4Oݫ0#Z~|_xu5n5H|q4TmK]6EjW`׍; 7 ^-%]CZѸBMeyC>.(hԊɓ2نi^96d[-uf]Z"T4Cǫϯrpir,)j~gk$}}/>Auv7ϢE*^PJU R&PW}|{j NLh/Ufüzsx~]yW߽??ĈK00}E !mpE UPޢhEKe K*ra(l `җC h G-HoH"8Et%8ca24,f`(7ǶDyGJQU5X?5xOnR|ǣa6N*Σ+KLCݱu1zU$̨HiHI곤(%HN NMi ƊtUXs|)w\%iJRJq+JA(|m}W\],vq!4@ΘiTW<*9f>*Qo^PuR St`Ä2:ǰcL#\tX8딉lJa^iGjd^ B1rvD;g;]'sp Β'>/a کW]yE6x"sD8ь?:CsC Ee&]l-(ڌ-#3M0&OdԾ}҃O"]HWD:u QQ .]oYpu5oC/Eb K<e[m[W}K)-1$7j'5Xk[~wem j8%uٞ - m/(l0'Dy6OrtaRJ} as,] qE0"3c8f,!LPO>qHqoKH+ &>OHC 2iWQ^ $!š@\qVFFib`]&5WkN鈮C{FYhh7_W>Nft=ҽ{:/˧ӧK,0CCiGF!&!]KVcښ۸_aػ˞MUNNj- D$e[I忟̐")IQE%q@ Eb4 :rD BCKp 1zH$pZSM `-aAoW [[b}7mAifDsiNd{hOr1ATs铳 {23Mz:<_h*AY) Z bNeR$m S&J;}u#Apl%A  >TZe0\s$ Tk/BdUiDka]T-:dv[␕MD=H)*m]b< TijV>;|ْDs(<:ऄ i<7)x' SZs.!hLBh@JCj}B0+e,%{Xnuu7:>A]GbWr`T0*U&"Օ,y;>ӟNƐیaVcp~ei>d\c]Sqa @fjkBv^oӽ>0nk/5f=[t4t q]ҹfVM 2~KUbr~OK.Pg9Fus~SF>|j\xRkSAV#ى<ehنvi=DWߍ;$vKyT_)KzvIlGoG*)ɋm9貳}p|pR̮'/ػAn.'7~:?gg;ջ'//s=5JeO4a_o[wӥjZ0SWMoL8ԑ ki[*G:Ϛ1\t*ӧzfMV+r0 \Dm`d'Wsa@.LǼ-(Oۆ{SWMtI>k~a"F-!K#1Pj/\bRPN䲖oMW66[|fI -;hsoĭ7E:bXr A,^"b}(˛Wҿmk#u"se3+/[ߪuKҺ{4 Ҽ2 ޕwqژq;^"!޷7]b~7-w"B랿j7HFsH'Ƨɸb?;j|r۟96 x4_31cLB{b8oV=A3i \l72C̭>&m0y;&mg>:6q;m{̐5nČ.襻]3St||6Q@x{;s]witz؜D0 :8Ty4I'оQFO_C@#LPғy9z/ѳCxJ0hT! 
(unW1L@ ʹpVHÀzE@/s`HgAV);fGd/L>>yT DB)%,2D: ShfeY1*fLlA L } {KPV'S.ygҫ\[-B!<"_Ymh2N_ej)w$DPeFpd:9"Rt>PcNi/hC@dȄ K"8hQH4#2z sڗ"gjT5BF>ōnZ1>yV`2:j5a˷yrRduV=Ç/j zH'Ӑ=32lrqhuBێ ," $)ٌ~8s&桠v1yaG>jӯ 5hJ"/%&P^u 3kYL4(& &2j(X3!WGI0Q ڕ^xXS/ cWDQ듽)܃1hFUʙ%Nj]JE[p|Ih4emzk(AQx 6 ɣ,e9!j(hI3TNFblFďKSC:,%*G\\ H+xEP jyKTʤT@QBN Cbܱ+xc=@؆&Ы#7>kgظQΎ[8QheԾk3\ Op{glHNUQX],E( Ȉ8=y |"yv. (J$ϰ0/OW4{o%yv/M Eg-sW?;CY \#ZJ N0-Sdg2PF,7JsK"!QXW#3g;ndf6:h< 0V.JՇ} &!!T=am]UoF=a *I*DU&QR)-]2>IlQs4"}r4H h|5>&M)TI8Z:!N:ɂKpGT^ $r*f^"-k녥!R"1DҀJhh˜bls-֔[_wzgT"F\fOrxs$N>syۮvcΆSmB *I7}Q/R!.#J hBMMJ\&X &GMMxgKP"Fmi1{[ rÑ 3\Ĺ ೧9;jP%$GsS"CATrM|Cj 2N9H` 9_[ hw&Gj\ K2R谨YjƨƷpC1o}#Ő ܹ4Z&XJjm`k@%N>X>C-k!8 lև.ti[:hazvRAoz'Fpx{O?W=ɯ=^4O^l: =knI 4guw;6,7Y.\XΌb.%\Z$w2"_O%L'[|',|]4x|Ҟx>a2w&&jQ.[Ns?ok=)e H剶*VFX9ɉrޑoXh4`w=)DKHzEkPUhH'*0O^$h$.D]TyoOLN' xE[zt%k/[y7fh1^?s39olUx_sf22q% N)i@xdCYQ<2Pi̖T&o ,ZGp Xq6y>LGfL U t&blz ǣpٳg+__ƴnm&[]q%'8;a!}eӇC!}>}H>OҧӇcEq'sOi"He6H hB-McvCY暋o\6q. q.\爚RY ^K.2 shIp8Fe1L%i4E)1D᫂6XDVM~UrI&{`O8DvBv(L=ˤUKJ@˱8H9{XFDR1`:Jt9KA,/Ƃ;;ӂ3f8!fĐ\,I4N$*IXnphr6B 'EFn qLEVX$de;&΁vտ_SPT|2y ALOJlNH=0)ȉ /1YNo>I2`5MKo sX)+먣(sI\-҉ժj)?oq|2,xꊒ=i $2iʤ)gh!<]$kncawNܪw%v竦FO݊Y]An*Spz8Wiz}wrG|y 3i+b3KڃBB{z%F): 9K.8˱ PN3LJ|ɵ}\N.jn{IS`;YEo`B/$+] ,K Raa`xr"U2'Og".q'`Cd`%f5Qi 7ܲ1f<Y+P7)AT ]2F E)EJv \.RdGQsfeHa@~slsWeA|tqE)/]w츩闞~I AhYQ`+0LʌpHq<,UȥPF9d^K_9Sje4 rS5Z&Ǡu<!YӋӋƮ`'?M4 xC/_2t\iS+i]ap>Ӧ }oޓ,1!c8w?^D>0_3_'sИn6PSv cҽ>ͯBJSUCkD:cҸt%9js顇(gGUxgob`]rq .s~F8LY| rhR[*ms`ߞ``Ծr8/I~۲D{޾nlvg= yYi_maCX/ն 3AG<_˟y:x׬^ -{L^[1s[O19&r(]v5L-Ʋ dmռ]+ 賿Gpڍp]Dϫ u׵xtQlt$aWVA p>6#՛)hOYWWt-)Pkz+SoOǶxEpyZ2=fA#Эn;$VǷM{xwVK=L5Ɂ󬌗!:Đ'DY}BP-0p*aĈ=+*09"k1 ǴJXdܗF̒w@a*W40I̯*WUOȯZ/>/,3u3F-<mHj1<$sfN+1mJⷭHY'A' # L$!Jb N%W/ tQKV2Q,,rpR!8\P\}ITХd8 ٧dWuΗ},9.=If\υIJ"'%r"7)eth x]7=0seς, *[I/zyDd$FH%Jfc9\Q)y*ZTMOt9nL=Ń׷B:l2Btq/Y[Ug@**WIWGx:G+!0!iíWx'rf,ZK9WEFr2YU*OM%_,U, iRvA$ `\KϹ j#c5q#c=_VӌGa!9J&nĿ ek*qϮL{!/A^Įm0_/' GlH@p FioS^L6JhdmĢ΢+.BT&֭L+@{> &CRؔЁF 8L. p3VFjG0TPv<eeԖjl`AY<Bd2Y>ū>Kȑ![KqHRhF[ATì1!gHf1J45  Kyh(‡#h|RُMP&`<DED0  sSj)22Fu.p ŮRʤ!elDZ1Ⱥ1Q4*e !e2 rPWjH$HZp$TFjG_?!@Xw9jZX\ԕqQ 83$ޢr t6hF>OܠJ4;p)8d$GL8.մxh*#4lϥ6sg 9-f@߈ُ= Dާ @1~ f(meO<&IPx>O$N$*;'>@Ep r)ˣ2D+1}N 2]@#YvɗfAGI8g9!zg K+H!ieG8lw uڴVWZ:P$DD/a1[#/w(x߆1u}v6^{{C?Z4Ʀ) *?Xi*]S[eYŠ~9j /(o)F4}(p.J*IQ*5)C&hd 9qZT^([UZmH7_{v/TsI2z(gFhORp Ҡ) TomzgY9yo4|pU}oz6Gl2J+fy4hBR %R7`訜d9VL@-[*t% C *Vw 7 q5-v"Wڞ^׎=^X_OY[gOAL#y FS"YGY.+RA,SMBr;dgTj6T(r=|֦%$p~8g_UY}7}ohSbqNǚ_ߋ_>]N$, < %z\L]V"rsAmgYWwT;eL[KB: 8}b>MNpU y|߫^8N&\ϴuNRߦX+ze,KƐD|8OUIIp䓝 Ʃh%3DʉQ+!mDtM,&v,V`;)ZtS„N9vNPFdQoep:\d<`5*fm _*LJ>4<*:tREvi8MQߍ uxБ<.@ rίWp>56/?l8IL C9֏-c؍52v[,I6ELW_Hr#v(!cɖ3 .$d,h۟'dxBhLA&$ XCt@}F͌vU0܈ &E2cdи%Ի1)lFtJ p2&J"p $-T 5o r=u<.bLyf?XwTgLX"x,1|扚`YPkƦSEB1(+yp[0<[W !FsiGReY@jv =p E%grC-f MKSVQfLN ˱g|Jg4_3@j b\)+"i>|֪RNӄL2\.\`]mY K.+RW+xT~1&r,d3f(k8LHϮ9h!gn6KW~l6G:K#KVMسr@i. B5ڨ5 _0|K/JyϘ~; K2}\Ho 쏫O*&zv3o#°3U8+г]UUM[l]itsEI5x+#`N˻ˮ~Ku᧛6=`nqy4ItU^+Y~ÁxRqL>^ëY^_9N~ꪒuO+Pf~jȇaMJ_o4?gwn[}T8+dzXaYD *%ajVjVSm2 73d<(ۊoL߱+s}{†A^_ w?O>2n>w' 2O1TσCwwh-z4>ohad/|&\ܿynI y[p+ 1!u)lVmm5#Y|:qC%AeGui*Je)"\LGzC++v!GBmn_ı[ҚeƤV j.1 ^&M\ϒ{g hfcg7mI0X+f ߜ:d{uDV:]._R.-X1a(/R)RIQnF$=3>q<[D)|bx%LRG('(ywRTÕBJmrl !QN!{iOQlNZy JBSH՚^6CIpY`rz_}&[حol^bҐԛ3os85 /5< ЎJ#QI5]hlܖU)K]EgK\istԢAOQf%J 50R 2)h!3# XQ\M5xYsͯ!x>VָnZ_{|E٪j]n]ߡuftE G2^Nw/zr=嵝A>mVbHV^z!pѥK3;C'WnwW/ws<{=fU IN쭻ۛ/|!n7|?孮>|λ8v6⫏~'%Qۨ'+]sUխI<dcKc#XC~`rm͙V6nO>7F:7q" m D^ֺDEA'1EW8$io%{D;k]"=S";C0# =YYjlZ"7)P%x! 
g:H(wFlBJ2 beYZie>1@\+qxIA$s<>תky{^`.cCSyvo:\c?eŬCSGRl,ۮ/d ΙPt'=RЊ)(s`!UlCWWu:8 Z)NW=] ]Ia@6Tt \BWPmR$Xiغ7,L̲+eӷwoe.:on`:/n6O( %;Ʒ% ̺7hG?^aPϰ,V =@(BU?"BR %RpPŌ:R^a;F*pqJcyƒ kUE;fk_ "rR.h?rF9F@-[>H!ZYl(hGeGF!'YMcR!d xpk.p3KRv1\P^ R !;DWXBw *IBW-@骠 (l:CWQ]+D+XwF ʍݼN K 3:CW.tF]!ZN[wRP ҕU.; 3mN JN^]MgTs݂6/w$@ZzJٲ+x]AOW6=JY!Bxg+tU*v*(tutŵ%u]Ru ZNW%螮NpEMUkuW jz*(۶vQJTu `UBӶUA)lOWHWʍe˖oEܼ ֞ڽ4g/~& KRv*d⤮^HþsY @כc'Ÿ`MFM\鄖CW %'54zyکbj!'0vwPWAvD*w;UJhufj7LKTW ~o-u=@߿O k÷:שAX:U~uo7txgKS'#N~\-E*ZnScYux;W9qj1h"RRp -X8DX@FPZ%nGrSz\zūR_\ux8n^ԥ3/1CjlcT^϶=I )1c2A7+1ToSDOsͱeR%9tu'MG>9V .?c׃6* Pٝ糘޼W71PI?/L~pE#eI1)|at5r^_0W|7W02tVOUҌPЗjpUr_1 ͠{MI ^.Jx0zzOr>5^[4/`t0UpgR NJl~< ۯ-|w>[n:az+,Ulx~vЖ;4#, yC fl1W RNaR8W?Omr_10Ṡ볡e׬2!\F,B0a<]ZL^jʈhA h(fYeLDzRm 'Eꗪ}S),G9à}9wbF.P PbsiEWyKo& p0 ?%*=Lzu>%% Ѹ=qee3H4jjL=IJ3ڌϠ]Ίrg9c.ћ7wCߙQ?T@h3x:ME::;z~hKx9ԓD&0 bh%+6>Hb"X j-X@)ѭ`1VYE 62( F刊`) NP*:dZw‘>_~ԁf3?0\o5'rhrS>q>2cyN9zeI0RnYeoʾq&R5*c [<4!J(WF 5LH}! *&#oB Í `;ZCR{9 3'4Ar&.:7E5"XJ@u4DUNq`q؝if ha[0ՄaFhNPƖ hkp6Й^>YΔsr5åTvÆOi^)%f4v|hil=K4USs{⫔qF[% ՏYTJaBqƴ&g85(I3iT=]*f SD9iu&+ߕnh؇yò~/;,l(óLm{4Y_ev[ fݣض2ࠐ+1TxC{".`"I$ IQ&xM"xZeHXG"\JlB Oc*5 h/<wkYl58[nr~@T=~f?"! Vh ަEm/$O6g`xy-+9)Lul"*QR⼑rن+7uYʼ{a%—ddn0-ra3HitL*"Q* FeT$FD00<8ܮC*_eOz߃uGI9S-H:4qI@igF d4E Iڟkjf 9aaFS%DmBD bbԒiG(@*z_s3}HU@Q{" \Y%(",_", bTa |ym' t&??.Zj!|k.)0 oJO8&|gk`7%J=Ok0EMT+xkQ>tiHnnn u/S5E[L@ ^u@gwJ`HVtb ҥq]%\ wg_?&) r+ih||I} )ɬe_?+5bu|( RsUm 翮jBc`uL!aqAJaCgNԝ*yZfro՛oerTl!WK?ej[mPKqybZVN36biWTP@$UfzqQ_wow7>GLǷ{o`. $8Mc!$)mp}E VP޼hEKfg M*7rA.)0wKC<4 \79eAiM{ %U71@Z 4mT 1"Gͭq`#39JϽ0.Y9|~?UgpmyU Z1) 09@Z,R31XO;la2=Q|Gņdbem԰YoǶb~SO9x/ٮrҝJ'5ô/&Ŏ)T'TlCؒPy 8b\nbf_#1X6WL8kHB:?|%gS4%~:?tH^o8mJ;{Nу1ͲZ/MFyJ)\+CEp 4XXFc"Z\H-FB{b@ي>$̈́sKק@1m;{ZX/ìŤH,ߍZ^Uޥ@/Ȱ#@/0BgSPЊ"h'zy2%1X%s#e6qII"" `P f]z)C0l$ )[X1c2b=. X<ㅴDL/[oy m838n^Q,갘\Woaq񶘄,xH34!,}ѤZIg "Ő,IDSTW!uY9bCu qs,3fҺ}5|7g>"祔C hߜçuFyEꆌg7`|-;~ X-ϓn{l}&[Mg߸}Tڡ~`r;{ }ϗfQ'!L{)c.ck!ڸcOgrYV;J tWþEB."xpPQ:%2MVJ`T,=JI$qqYu.B.ZN54/ﳃ9dwum;`_:;b)`OTpS$y=I*3>`bD )d =Ǟ " %cbiq (am A\Y 4:8%gx)+5n0M,EuQ}W W,r]dRŅ8sX y4\v4Z~B) 1I&%$#iF+IX2yJ5^ V8G+Mfch#Ta|eV-8ڐh t[*8Vd'kHnEm\*}l_7=PnjBZ~TeG9|D49Aeˉ2AE bmS= E>xGwEYXs<^]ɛt&ϠMHT܂0rgj׿O};cZt.VxB v\#nJ={0qcOJ[,ڕr21 9 XϪhs*&# 4 vִY}%9G#E(tKU듖{ԯ4x}pc&[¨5ZlHq"-Dx(}l !vt@a%RZaJr Q%w^1ص:]]*1{J :XyD{y)eS$e5X+r C)Z*)Jn9j' IG+bK?Z+jnz7E~zBL 1@ʂbh ('1PO Ue#"X޷r%`9ljdJpѻlIvN:RBep$mEmQF& m' ӈ7#jTx}fw3|;xz-r OΔTQ`<*S.DpX㐑FyXTbTz0϶s:={#y8 JlF)s Ss%_VwAHDe}5CFi%JฎkF 8^=2rw[= 9 ~9ɭ Xp}xٟ:Y消𝷶rZJ'H%]/<ɥA8j[o}er‚aΩڽY/~=~P;x4f=?ܓ*k&y\(/K{䜃%`D?-zO< imk<9ZP,z痓w-~%gKb2'+艉!ELu8>'[N?~a^~u*n UUM㼽Y ;No" -L~7[ KϾ1Y4e^c͵b<|8qô>D#>ܰ6ɢ#AkFpgZEkה^;Yeuq Wރ{}}wZʄd8S…S._f5]ŀ&d?3MNO?~o){ Q[E66v<vlS[ňCw;6aH3k_kӶOt1>N> hpsn=lcN A7h~8o9p:d^eÞCy]RYJ-V1UmWi&pˑ#],Er JEXN%TN}=kHM&qɢlA"k։͞_̨1s]]Y ETYHA!A aWzƖ|((_øpjߺzXHUњ:շFO29~NBhΩ *Ti?q&@ŕ<[,TWy+=脹-˅'H)K CQzPHBڕ`WĶQ$(/|2aR$\%Šc0V?flXx+d= ~ʷȨ i\ʃA0dU0FM2,bAPu K \.RTYfN6{6L [+2gi/(a)M8WSROΦ_z wVgbl z+C17E/*xҵޖ5$4뤏E`&+E0\9`*U) o"OJ<鰄0R :B+RA5263g=2Uaar$[bF,%qv^#˜̾z}͟oGg؀ YK1hEae4l@l$qm0u 5ًNIiUaS[ǧN|vNEA (TDv1b73g=b\J1mQ[7Fm=ޮ^;rRYd >T9_E{@= jXs 8CȐPRk)k>("&,jruCf>_,*0 "6""4FDqDVEH{NbT92m!ǹ+g UʮXJJNmN5 h Rq&U KV:M씜I+p:: ƈ̜N|:mq6E3∋' $|c\#<Hc BJs .\ӈ}fc[=O0-in2) {Ziw/Ke'd5ߘJN鯿{6ErCA 9(lF Yڻ`iQj{l֝DZ'ܗ봺37$طu"޾v5=u=s}uOI$`Pj,A%eT"hVƒ \|rbߌ$O|8}mڅgqV\ 0+_RMi"C; a;_3TL(auJ˲L]kݻLGeQY?VhLPR#MDŊdqe"5z!(ce J66rB 3JQ{adۙkz l걙9Y s짠"K{Ւ#ΖN|iEy˷է\TpEk|ΜGJ99ٴhDR;!1VU 6^d$|.N>dHG6d 9,~M̜ 3CSS E C^[S]QRWcBT&-u*lɗF(- tbzL2R<:^ %8^JqX JS0khگ;RHU-#2X2sXV !+U*d F#R% gk_cՐ\[m[GZ RKВk79O&?,]os=O90@мf \itRvM>w.ବ=oOM-xŀ&:/C.i;+ruy7,n,Dhjvk W^b.968#%Rgg\.h rjD!9ben\j*3&NkO}WݶNdz| bH0S7WpD)U'$MZ^sH^ * 
E5Ad+Dv5:o}]}7\MxYz&|`fefwc{qC|;uܼN1-2A~5Uu$)U# ,BL ]QVn*1'ѱboKTz`mMn/s\S[GtOY8k*yqLT4z54UW;kwS)#s;ϺԨ:8N&g]2kh|{}/zg'OVMWXzݍ pݕ}sm$_!E:Un ld j$YҬgm=$\(G6HdVթ缧=zy"qLyY>;sPy᭺m'BVDcޯ~{{v8fkQg7w/Muo/?'ЧEm5~,x?++Š3p:v\x~{}uޟخ۵[8_d/d)Gz gWWkVbn+]{oX3 rky A'ۢd4Ȟ4O@ .똟@v>ݑ#FrG QH!*HqwpV9gF+G+ˣq踂ʠ&\#5Ə Wb4!WPk5 Wr#L:\ IJܞP~it\W83,(ur X.iZy{{y}8?SpwR<+7NNfoGEI1Mrn:-uc?^] {#aߘr_og?{O}$h2_+OWīdLָ&{ /B@nN yQ8ZDˁBEt}|t|:w2볆n%%'V=c|df͉2~SW4]CoNfؘg'n;1>kMCʹD{uA47 Ϗ&UƪAQ,E-[5QdվnCՉr~j_/=~飌[pJ?RqvcӬG+hpXp%jyL:B\i܍WL_7ĕ}֋䞄+_q%*upu2^,WFo;bP2Wq%*pub]S1~DǮDJԺ+Qh1Jz{CgZZ>|2l"u-he|=ӺGI ۸K-xxW.{4'8\\ǂ+Qq#ĕ1lDO1(rhA#B|1(*'wu¸G+6ΎW"7XpLwWrl˄#U~!-Z 2L?SNM8a2Qĭkvo5qПϲW |5lm&}DgcO<#3#2H ܗO~jyfXT8#4ÈHc:)ţ}5S+螎+Qi[̦1ȅ`i4ьWP Wrhgp, Z 9;;D.Qq^MgFWvDsJh]Zㆎ+Qi'w8D9z8^پu'/|~j {j`O3{LzГ@c9KяW5< WPkXWPy#Uplƃ+ȵ~TFpu zk{Ь`ֽЎ(mfm\Z - M[\"znO g?? t ^7?]i߭j.Z)?ܻo?j:۶ h@/_:?,6DA7bvw7ޙ[[ofѴ&FBl)ot[ws0:k?ю?70LW|nws x0Sn-[3K Fjl^|d'6U~,P~sPE y0{?[a7 wי);ď34tpO_ߦ_eu ݴym|wYC7?%mi-i=d7rdL.GvFF8*vB0 ˞n~J{lv}n? \SV 좶wwr99"uW#гn*d>eKI'b6p=jfuN)4j̞Ѕ1*UeV[rT{U]*)Ǝ¶&O%~\hn\}" L}БblҩL>V ڜHj}k5D*dUɵ`0"17hFR)mv>kU-6 ͛TbwkK8߭'fq6H&k2mmĩ)cĜDh{mXzHBuصvh춸1kcE{))w$<ф`z^_luKz Jmwxy2 IWDISC(]Bȥa1I{'mv( Vu z]#6ȏts:O}d|Ht-őcFgP2d]șcq4x}asqwnn@2 yrgk*uo-s!DAu)'o ^ܷb8$K$ѻ);R3B0Ǒ֑5~B_ko(-RrjH)"jaɈX8Cpj6z.WPU,92͋ŔbS>drM zXHad@w"R*VڳkJkCv:vaV#_f G U4.2HyH3rS0ؽj[;y9x9Ӷ;`5GeA rܪCZ] ty6T:8քDW6 zPB2fCl(vu\s \M Qx"lU#)6fa=g% 1*\.JvdZ[ELN=R1C'p Z-VLCjD*1%V\u  ns[2CꛢDd8MJt*TSaz@eEg dX22:zb$-++eLAAX[@tHFBYp6{4v&׹;f[*C(]gԭ9kne)q`&boS/%JC%;:P% b-A6ud&;9Xi TtgJQ"Hq[ŢA(x>hϲwD"=dH_(H~ BrFA굱ebU؊.k >))"FZ.{F3yZ 1,$3' !5Pj-%ч` ev1aKҰ:4]AՊXZIۍ །A u+^̈KUU#f#瓊1Qc󪈵t"B!&D̿h{0bg7a r]<]ɝr]-vݸZ015,.@6=DP 3 o f9.\Fætd)ҕjhJUaİ'8$;෋T,J茸pQ4x$ Lȼ`|CF2].== KH AɣN1x$ۑ ox>A,TGת>/AXż¶0VA;]0}y~WdT>B",'Xk|lDHc ѣ.ɧ6sP6DRD c =n;X XiLEQ{Xt,a3m>%T MIJw 䁀:A:Zx _36,-D{Phň޲3AKgy<4 r幤 |d!"hA!3S*JfWbe@¨U8vGyPkU=bQypPBBqYublQNX  sL{Ē!=hϢ;x]Bt9wE:/Nޜ]^5sT&M+]al$GGOʔ=V` ÿ{Jw'UEŨն[Sr5H<RVO ]j31rzGFj3bO:V r &% xKd]0+9_PnDV}qnQD{sI,WR SQSFA@%fdi if8`==b-f=) *TOZW4#RwmZ_ދ|? "݋[-n, kl!򌪑@=˶d˲kꓢMH<91pIm%r0t^Y aF8W(˴(ǰ`$> <*Dt } b@C * ,DhI2ɵL 1׊=H 렣($ e)$M`&mH@8\5W kkVq aLKf,S`›RBѦ2RsUx !jx`1 a˂GҵƋуXX^"L Cnq,=(dPk<b0qy 8BK *?@! v:k|A}EXKNMl0:z!r@/#CbC Pq18-`kt:/Sf\WCމh2~GVǪW{q}]O;x;d)|Y\,vt"CW[ή^{g 2"°R`;ztUVLn:}דff9k[~ , Κ4 c?nuZZ y=0?ې_ e~?R1Vx>[qP`>(oE]ts8|5(1AMN~_,8R7SHb:7g3 4_5%MNi:j֓&jؖڭv߸j(?1>cL_\b{$ \e'>ZW`%}+F W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\=A}\1 =?+ZoW`=Jp(z+Ox Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(zbPr#$0+xoW`G/+5p"WmPp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B TpeO+4 7+/:zXi ^ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ W(B Pp+\ꠂ Õ1}z_ԭ!k溅!tA,#GlVч0EaʵkPևhzxlѽ W`.?0[+챇lf^b\^v.Շʼ,\wٳ5߮bc_DI~p? v73JO, ~w%= `%zZ%Lgk-=0 V3! 
/'LKA^5l:q>h\}q'N*p8 TT7q1p`A]«_*qΫCs.\h|!2o[*E1GsP.CW)Asvs1 Urw41|A,|>2wrJ+dXX-EaN&uc㼖'vײi:(q*&cܔ2"M㦻4!36Ny/|!Bt4F\DBހ?0W_Rj/ij9= W`7*+X_UVcWJ%G*59VQT W/'\Y&`98kM_•"_ VJ/ĎM/Nӆ>?zϾa>7z>SzȎKWվMO91(\|M>ìUUrmpr>lb W\M{b\cꛄ+G*LoU6&\]<>\e+) (\%\ek?pK W67Qk϶ru"% <ڨ-|#ySb_0T⫢H/Dy_=`gWQ@_e[k]ܓNܪ.$9E,D ۢd4NQ5 dX sG.[6mI ι/.E'̕10!#5y gseop%)J`%ai6gR䣢S]l}A\PW󡺯nx"rtۃ|$.dl,`tXJ'_rako{Zat׵NAηcyBHi\s8{H_+d0.Ep:RǶfѽmXAn4#8pÝ]on.zin73~bjipz1Umߝ/*a5m>lRxۭ[}9 {SlotO޶]3Dy]yTkWngӪKVU\}`6ao>f|wMg@Gu/LLP*sͪ ׽MߵSU֛?k~WiJin\/zLWٟ ,ޜ.٠]/~qӋe"0iTY;E aQ姀/k)~BcLgZΣOujjj~ivFԿi(T0IA T3P<Z\s:O5J^@~r"XU0cx!wzcuoF7;fOl?ȱ|6҇ɰ!8ҕ̑r,ԄGIPoQ7q :x$9q3sMm7D6pX{OLVب:ܹ*΂%wlvu'E}qݷLGCEBA>}Rh Df!d_RCRK\wPR(sxZS_qtɒpBD K/bvw"3~EK겜yr.+;}z67s&\l,0$RPz଎|762)aI)Qy$!kXُA4f;l<~kzx.Ne'T ˬ-abZ8&̿#m]c![w2gbr1w|tx4MgUAxɕ+ !(#7$k8/_(4/^Z͜T=i/]* h_ 1#l@()*{XxSr l 1PRrR!ee{kIyiK`+:0<9u/|rnٌB5,r"@ZZ7Vۜ;$5ڜ^woif<fIz> t=iBmӴ4[W]Bw'Sd0 exn˟.NjItle4wf Y[wO꽕χڛwäHot{~O>~=ߓnWDo:\sQūw'Лw|K~n~>37ͭJן| q㑫RZ)!B!Ɵ:{7FN/^-6p\Sw{F1={BEO=)SꠥzakmjFu͂hB^;`GPH%J傩2,95@s%f3gG>.IhOZ}/[`7}ٝ8C_%M^ Rd&"9"yeȔ%T"XvbEU+I1/ Ї#qcG<՞Jbtpރs'^ѬB/ЙsXÔ@O㵗a-Դ )]35gLs[>wՀ Ǣ|hyYґx 8-U0&p2/Navk,?5XM'Zm}lZyyH26._6 TrcA+#Ɩ1fYJX*3.EJ!@.%}wϹL*&&ӥe$c󃙳#^Gl\͖ CCk /,,ﶄ9;n#-r-E7w.|VZ&$4ExbF9Jx3ME4@k_,Iߍ.\i֒@ "R*(oTCH:BR\q폭C㞧*uBWgYwqƒQ8EJt`D%K LK@I ttgq|)zvHP#r0rxhq:hfguPrfsei%aQ##m;X6d@Z a>*X[xE$Ea_LRFQ*gʤE֫Ekw1)7޵yFiP%K݉Zv:(vS-Ldc9OA٦2MDwS~)wfBY{ϫS.'S{`xkޏ3|yd25Y+N2'@ɹVSN\ӤK?:i/Og" 2XO] $yU^fs2rin S?f/JU{t&òv~]Lq~Bz!`@2%׋OHtvv:UPpFm%yq$dHSєEg0V\&WP;T\+Azftݿ'l|?^̿x6Xť:[nWu6QV<pղrGz-ٖZo$.eöfc;[1Uus-摼FWnVy/; 7scܝ뢐rRVM 6e-"epYK? ~}? jԗ|`;}gW`0ߪ8 8ryuD'*)Ҫ&/EmmG=i5UA?'tPw+4+$:}K <}ذp|?wmI;qT/wY`{dq/Ւ`rD:s")IQMkd؊8zzC~w? w9҅O5S9Uy1ޥi .M[^ZZ8҆ռs]U{[%(\>bCF䆞Ǝ^1]Dy w8u*{/^; !+%MAT-$ B~tta.Mk>1'{>qcs^+yLj%KY nzzAA80 "4nTؙGooGFjX߬c`!S|Mܛ״u%is& ѺzѬTVdN XnC^u06N94dvlp=#zFJU'_]N DgJyGLYP9' cTdcKL0Q[c*I2f!mQJtmp9!@p*^ x\-qӹrFWi}'m+Cl~BTEAR.1C)gшHj AGi"gi27Veg҂3f8!VĐ\,EJB4m-FH-IW 8&u"+ fhi߮[}UH;\T3[F (*}<8w <0d{`RZ7)dA,1YNo>IZrr T5_~bg`c!22$yu($ Q͓42E'Cjru^b,_iCU %%d0&ѤEZCMZ\oP<&}1IV|QT57n{_4ʆ6Fy127[0yMi V!U16ޑ"xϓr:ctWڣ~eB18ťm+;)9Om zs>('l`GrBUGpY]窈}NϠ~x/iɶ##ZY&KStr\pc8.JNL^U?/jFaյ\_kKIv-_Ч4}?i#?vx[ہO)m_&sd񓌫4MLnҗdKdEv5of\/W=#a%OI.g'duَIp?֓Vο:SRy{cʕ]o&ӂfUޚ(q~.Z/^_JB:>ጵ+e}DWtS\OW םRiTiݫ4v?a N|O#Aw{޶8Ϧoq]fIߧߞΆc,S=fF"tnz::٬gv:l:;pcwKp{k}3WͲN:ޫNFǤ㌷;cYf 5ō\hwlO`ݩRYs5c(M+'яWAVVVцh'Hfγ2^CfwT|{-0p*1R%h>Xv*mU3"k1 ǴJXk2r XIm,9x FЫzEch޺nCTU!C"/lw-}Ů87f!q"-DަflڌEE I3WCT&g(@8{> &CRؔХ]d2v̺,!8B,[s4CAjq*jʨ-G}`AY<Bd2Y>>K֒_+eU0+m| D A5jŒ+/$K<-zDoKVאS7Z6X]gLwҌ|&N;Q ԁԁi0Uc F+:x%EX4V1ގ vmsi}’C\ oz=,voGgmm,bvsKϦ* @\s rA\TpM, vttsmϜXKun93x$wN9Q2Z[LeU !6DY&KkUK#ͮ&{Mh 8)@|dNǕ% NIئ}3l~r},Wx7| N׸ŧ3폻f;0kB{,7IV\#Tz-p*^ֹ+H9򰭮Nz߼0{,c+onlFlCV.~wrI>=l@YOIy*)ܠDHY%62<`@&;9Y.dL^d%_0#::7N8eGzYJ$ 1 ^9WM#:u9Xӣ}kyEHw%n>FIY~|)7N}Tml%76F#O!z,E'A?{׶Ƒdٍ*?xe㗑PԕĊhj\RlشEٍʬU'$T mE HTvK*L`ʡ|JRY~kJGQ RlRFMx&KPuYg<55p4Eg/ppL5/aeDo5%D9 6+3,L讒4+p59:D/%eth+'r-œ| w*aIyoj'F32 X1#22BdX HL"UifK-9eP}ӺB,HMhn i;Pݟ~סy,J6a#_JaMeMjxE0>xĒ*Kꃃ=d>7CA3@ed1gpDs :9'h #Ze1nw-"",y^vl3g !׍4Klv^哻Kp|Qi{-X < >e ɄJJHݞ\d2">K;O%dEt%{QG#=+Uk>٭kq7Ǘlr׎V^gjO1cW3yڐ#b"mK<*ca{,UUJG=K d"MTa-rdtx:L/j?|`yHk(TC82ŸjA 'bx bKadV@^RћgK$GɳLDw*]ZߎN`g4=TS SH+Bx.pVكgfR=KM?^,ؕd\^}+"sU&u4EAbRRj 4W2?&tkGcZuJʛӛcHge|`5>Djnpdk>ktzfh=Vp8`j k-_ 9/_{x:5h˙F7@^+sӬ,ןw$Rj:Q ?F*^ O9K;/`<9BПƓ0^tg9~h橝lT䊪^^/Xyӯ}߃Tx>en@ Dj/&`u JЏC7]t#=q>(%jlAs*L{ ܡE:h{"%cU>QAKJlѹKHPc2K]@ e{HĖִ5:^#c.FJIzցR1 J |!ĈK$u szR" r*:$WޥsQGBqcL*ȡ W Q9 (t{o X pdJf",`/':o{ey eټi6h8}s_e<5 >iU?B ^hFHҋb W+U47uU &ƺ$O2An=늞k*lv%Fkga5g zΞg;{Ξg;{Ξg7Y@x Sx=^OSx=^OwxUQVQl4E6Zz"(!f`,Gby8=\eu>W^j(]νZhwqiP3'=8x$G 'W$H h@b1Vft>'¢cP]>r$OO_mx-7ZW)a5,"xL|E춁&Ӎ 
e-D*bR޾6kYSaњjYXTGMgk ٳ"dF]U2A'8K_ağGyӇW}ڤed|a#F BYH0ơ lE?e[8ґkVE3P$:"C@("uH׈ch(C2 "D`_E'Uw%WPs}jwƆykK܉V`iՂ߱7WiƟۨ\efK22 >~7Ue\u݃ 5-ŕvei+\+3Y&5WcgnNpO.ɩ5R'~2ɔǓ|N3"{3D 9,mc:Rd٢YGӖ̻}SNsjC^}_5$Q~ۼ~};>S<-i^kI.~u]38ov⌗|J3Gw}=^v+?~f:u=->?r><=](63?Zy}n~qcpc6whueELy4nm`źG_=~ps;bS }m3-"a\h-W0J?'TEY}]wͯ~~AZ|d_~\Ry3VI=g meIet5aZu`<;ʺrF–F g G˃88c%o7?S巟~?Z~/y:$ŠWM4OQG=89a2C程B` h ML" O:EnTqIwzaa]V}}:7(Τ:ը? >`3[q24rkWM~n~}?6*Mb%,l^ۨ F_Zb:YFg,=z&HeѮR-q:&?qaGpdtN6ʋyzr8''ypϺO/Xiɩ gV NB@|NIKDB ~Ժ<>J9uvYGs. ? Yٽt }̈́gŎ4Ef} xL:L; `} ~DTƁ2⪣)YЩLYJ}Hej <4XfjXy=8]h :F%%M*!Y_'=JWj{҃+HelXɱY= rd= O" ډak}@lu"'ISܧvO|b;#-xZ_G<Ày]ُԞw/v44qB$H={xy EUʀAs>jaCQX _B$/JBN6gts dHч@T,"%jց52؀]WL=cg[`M/"#F& GyeH UǓ[[{45e2*$ 쎦M/m x"z[Ssŭ&oina9S#Y䛣giͯVŹ*Jb.>E!$ Ѝ Z5lM(eZvT "DT^LJD]s9j dF=^i:5}j*;%{V]bv鳴~~K>ѹ) RGuFld.Q6Z_5CUאl })qp[MG7W_#h]DM5U|iyTJ5>:.!9jWmXkS 6gRd|MQ.,e] !kvE-J`rʝVz@g :aeX6Mah .w" ,MN[*S)ޓ(biT ĬLLRMf{Ls;ߞz aA(WĐmHȧ]7!]#jYL:$,߯z8(Y))q gj~ mqb4-͟{KQZdYı\)y]g49!v"#PFxNX41cYY`6[DcH5GY5qvU̖QT:ܗD&8ҊI! 4I L ||.8}dg`"C%.b!ddHnuQ,D4OjN V5W+/lȵ=i J i2iJUVC )x,@%)UWE3)rXwrS;YkTb9KFwZ {f##RF,^5rz+"$[iPig}1j*7<11i|!BG>%  {%pA.E.#y<(뜱UTkuz5+kS蔦=e GD, Hdch|F6rQ|^;40.s㼵 S )lM Ő%橊.nlE!ulb܃%4 Jc6 9 SPյ;Xӆ#ݵlM W4eRכ>oh}J|J[';:wdeNqؚ)zLZRH+lߋ) E_`c8Hs75R $EleH)Ci RnJN) 3rϣߝv\C &VXOv R xBX*%uPFK|*l`VvLn_:TF)&K|#&ۖ:xkeFp֛,1:LYșn Iԣ+lOz>./V= yXEvz?BLzbJVX"@/40 ; O9Y-x= Ptoٵ0$`Cd`%f5Qi 7ܲ1f<'C*U6) p7o\* yWrb^fJ9V_'0r(K-Tv9+ Lb`vI)X ;o; j#O-^Z + + 9܎KfFDƿiGKL/dK Gl5O>|( rٺ0z]^wvS`-S.m4 drlUy#fk+Ww~]d6&$3ale ^pخ-dٔ:Z$,滥N$v%\7n|3.םt~l+"aq8o;m(%0YoTgesq?{{¡]3yD>Tpiyn_Ā.d?o3M&ߓsAv|õ]tNuNET3#Eu:}']* ,R{Y?g کWsn]t]K}S|yN%QccXw5{kMyVbLSΓJp/R|.Q6TY4$;Ym<.k-fpL{teuMFνa  : !%;@Y2zU@h`M[WӍ=k5D#jV {U=.lד#eE+pBUϊk"=_!>UHVYgh,@$ꂶ 0.#je6XS.C> r=3˜; t&Vg{!OkRm::.=If\υIJ"'%r"7)eth qE@"l%}V|JE!!*c !gӶ-8-)~W.ɩGKKqb>|ﰒN۝' Jm=[ЯJsUT޾QAVduVB`Bӆ[ȻODX( 9r0eU&JeXZXҤHs>HFjlGzJ5,{a!9J&7`!4+x9}f_f+_ɧˋ#6$ _DAi )hY&%46cbQgQCxĺ 5`'dBRuSL؎Y8GEc+#v5q#vZԮr@fXip$,1LO&rdȟR\+QeU0+m| 6Er aMB7S0!eEHA5_e5qa/Yc_Dʈ"x -Ef\(ΥؕTU"DLH9Y7w3 $wBH \p&*57()`7"I8[%Okմd_\ԕqQ 8⽤loQ9A:4#'fnP%wRY2B#D& FÀža5Me<ܳ lKR"ѫ#6 4 n`?hʞ؏˱YNOϟ@{Do]^7 tj &x&yb Gwš^#%ڻ5MS*gb{9^ƒqؿj|+^~w{/eu]@' HNh-Sv"`NZ%瀉Xmw#ɉs5d߭j%ϡB#0րepUWqA_ˣVJ\'dl~)D)>N\>JZ KWqR wv좕L3Iuɖ qVwno7:Ǖh]V}?"``!BSo]LXt/yZ^-x:vTWu6yUsYwvhy@-KIt<[`?umQ&ѹV'kijz]r:H<u3_cwYS}eCo7U\cW`@&+@9kQ>ېn !2eϻJݻL֣LV)-cymn4*)ܠDHY2D+1}NVz˶;Su1/OfAGI8g9!Z:HVyLWz&i]E~Sv"k3c"ғ&M7,,o||*79.]/ht 3D$]bZs`1ZxrيYL(9+I04j u)B9ClHK髉.`)^ ,2˾Z(!QRz޷X!pqbQVeMBչ^jUd ,DN_,S h)lXĬXv*qS[ _RmWb4/8Xl,PbX%]Gr@3gEAG Hd] LƐ`c=)ƒW=k[;AyW]qbU/Y v/{P/9(mqo++Cc K)|9>:˝;[#nύ'_j-AB,F LPQDyj>1NdǺ^nmaiS\. 
%l.~}Ļ% 2RnLk -KH*ĠE85cx`Udz{6PgfN/=voHMW |5VA<J!9"sMP5Y(ŴC@G*dkz~|DG:, )g1JWFkt $Rъ|,%l9ggoI[RzW̔xݧtasi-Gs:Lr6ht[^4xi!}fj0v#~>ਜ਼9\1ArqNj|kk Uumݷa\#z[.N_vp2D|;"~VA߉EJc"!^fp0qL(p0i<\=DJŘ\jߡyJ{BpEky:pUUTH{hp}<\)•d[+X2~2pU}`QpUpU> +`Asy}Av{4ы#1F|iʹ4=dn3@ zec!d I )Q=[ĵp,.z :A/z ۜ*Q| *o)i4Ta ZhDe8/꿿Sӛ~-*l>No,P;@zc1[ V8DI\dGLHcho4ך"ӧO7P6J d*e㳲ٮɵbeQiQdZMbݶo.0ֈDc ~.Y֗O{1[_mB Tɤpd2ىyJ0$RM.G#Ì2׍b ,耳c.Vj91%c^gPhKQk6&õXjx_ lJu֥HH0ݘi5\:Y(k_.-Mb@\Sd1Lj62dKd!xوب0A*#ܥĩH4F 76w/JPVsQ[e֕yXW@NSڣ)*9)h60,/%9)tF1iB0{@SlRK3&tB Gk)pM8-|,$6b>EfٹlE.ȵV=@ro'FUv5;kyKxOK )<,dV+C[]ڋ"v9;kR(eۊ7kSu>OfN܊愗 87P~z׭W~3zK$Nޛ^}nEBVq]J5 Π|z;Kd,4\9(+>p@&+T@R 9E[\ qH$F1NWzwvm?ôgs-i W=ۯ|FC: Ca4 Zi"h i#H|r,:,1Dj5ez|,ymZ;%E.3ϵ R RdѰS1qVdGA5&ii5l ULx3,*5IAH/tj|]+Fi2>q_GڇB5x]++X|C5Ua zKss#3DFJDh މ {, cA$NyQzkg֛˷ ÿ͊/q+dB{;#-ZƐ.=5;pF?2@5bխB;^Ŕ^Ooږ+| 'u<uhu*q)T,U TVdY'c LmѺ,/"lD Xp>56 0&#\IqּE5]P{M!$RU6GB  1ۨ`jTA73 28x9jc1Do2Y*gb4ј>lXAgWik}ʢ(.9Т2<4+ 72;" t,,]7.'='D9 LDx;hfen@QpF_qUD2hqPk]sO2Any_*냼vnƳH g(nM1L9T=)ܥ 礳Eg>j=ibX噶h+ Б96_Z~^{$g^42ZB`jƠR ^Eb9m9\ETIkehGD&82,)"KBRf.B::H72 8Gaט8[2w&F/_wj4~Vtvk@K|zRS?. WD"tY34&+w1iᲒQx̉d#76I)l9(,W,&9Z+Ip&Yar"P%sMhJ5x%x4N:׋>fFSr31K%S^nX-eo{".cɽH"cmϽX\* ^|9 r.wȑY<}=એvxn*0PYЩBc1awQ<^mx|ui$eHVx)Bdp똷.i!kltD-#6;a`NNd48b^H3U27̗a<˭aȋ3C\^yRnu~8  $~\!OͺGgc*u0 DA%ǫOj ~Ob 7H5"@b@(q* *bz >@%#ٽdy[4Γ$=`kmi:*e@ I^g+ozv1OںЯou&-jo`m8*e7 ?PZ:=ݿS]zPlp&:64JdIz3ߏwC(ʅW0kw] +%ZgJ$w=bQCj'Q>, '(NRz$'WL\7KdkՆd~?4{^m^h}Uq*Z˦I.jrU]CŴRݵQxMk9-!iJ3ޭ>Mo/5X>S}l0ʹt_|?;[io풃I,^E|kiw[u͈XC˓-)bLam XVEo_':g y'׵mVkD PVRzA?? ݶ[|Tn{qug8ǽ0{T[DQ(Uͽ>'7LܳP4S>?^9)ɟ~7Mo?\?@kh\&)LDXR}EAC7T޼ii`5k׳M.iw%SdeM! X6Mم%e#F;$ꥧLʎPC6Yb*JeXP134|IOp֞?thyr?3:TOD}$}F9B!fyZ=8Zs9B(9B_p~.=" ]C۴%mUK٫d, mj´:ɽgt`-t.8jߖVWlvrBi ,%!=M'Y7xD)";v->-=dڤc ug*QNHw3C^d3M>')Qmgz7s}>(@/8jK\mup{zĬ9d3*v^ $> S%<Č):bѠTCT,&fZ>罥Behu .## k^9E(1 " RPXCaÓiƞpg6!dvĄaQ Mi7Rٌ~4K&池v1ya=j vk' נ)|BG{%n"3kYL4زMQɩ\Bm`h3 5.#ir1ˤO-~f,M\e rR\(GuhR<&<2bV:.yh\lDgb 9I8w%3_d'bM'1a A̫AT+:f9 VZЉ䚀lr&n; GtDQU(訸MV;jx)KAFAXu$Ir!DF]%u,AADrz0b )*Ҝ"Pڟp@ =(ɍeiQ}oLdѬw pa)$\L9KR,F<B,% 7e1b4׳׊QB4%qGJipT\ET!2c 4}.Κq,(@"ؖB`IAL2(q Ylg^lV{Х\ܐ RҤ V +(AqwpnXTGh(>qJ*PTn^w@N?5c"ly8T`F\,sHP&zUjğv΄!֞z)Pvy-' L\Ra;Ą8WXL&vz]"7S xH 49I)&e$)qYs* 48 8$hk[M:90B"rlʷzDB$Tl0s& 4\bnH"u.wr9hK,h`)xk"-b!TG_:fl;c*iKP5fۻpe\e5{LGq\J-{\{vT `G]Śy2߷RH Ha45+Ƀ")'4ߜ޷ɛOnSq_ >_~;BKY|=c/?uz΢ @VzьL#W<î|p< :9kɫM~ڟ4 }5VmwRNJ(xM Oz 8ŭ5=?w .nm_ͮ~LORE]s}>\`Mj{_ebOR٧szXowU guGhظ8[3jL5'_rnɍvfX/FԶ޶ì͍SZEhp,*;-  #h!~_8E w¿kFikrFƟG XsC/ 47K"g_V.[/0 g~ ^ 'i8Ÿ!0S5GP$eLgX`liMK* ,u )Nfmll/y@Nz Re ۢ,|XH,%+#&h`QN _'p⪖w { = oRRaHGu"4>7űIZ%)5 ]\*zO>4pv ǃ599_S OkɘKϪqқ<_KZ}ykD;\h( Ƿ:"f67;%Zh@rFϵ,wJÃ&|MͮG01t2kR6H,3Յ.3n:gkek.XmWmy ˡ݊JWڷj].0oGȣj*o.%<SJJg Z 7\ڒ}l]>y` yb@x/FVP߭Re|b{' b 5P)6xiұ>YsID*߆T&Y)Uʮ&X++Yh}P&TG>MdI4ɢ%kKnD`e"EMD_NWEB]ܜcZ ;U$jIJҭb=U,&>\$dW -G*IIIWk{w7R.>hT}umO姬`V|!紏MC29Ω9oxgbbWͩꊤ]#RT bl(, tl< &*U!CbJ]M o 00Aa9K=:r ;!li0+(Sm?b7vjuҳ=E.{d0 /GDA`%ѐ$FDa@0Hk#χ 6W 0?=I\!J:\us+4S xA\f1I]%)yǮ#\GN1:$.9$-;$eW4p~L Z UdZ$gv${:pv>EZsWXGս28pu?)-LNtWv=AX zDp+* \%i=t)u`wp|"1+X\%qUVC$9C2rDp3|Jj},pbrQ1IJ=G҄XYH⊣aWI}Hk7tpIQ;v8X\k0΂&f:Kj / Otޥao2=Ɨ.ĉ?.)1grA7+-K}6açPzoYcQaƓu~'wozU[3+ڻ]хy ?iRmPw-':)`&㴡b Tc$Ny&H "m$TҠ UzrF5! /Ndn`*'Y NhNKJJybSDp d>/a9&\^ vY?Fs.IO.O!"iH9#Pq+5AFӅ )ݷgUNK.x@mꃄpr:.O?a.۱/0ݸPJ:)C[r6/[HT4ׁr]{tŰ/+_%d&P[(Zf㦃|,D=`2 QZ"e EQ`qp\aJҋse_) *fr.{c7/)(҃EI+PR&-^E+T6(҄HXZ,3Bc`)v0>8I)Rg^#SӍӅգY ۊ7h5ilvvURhA_:V[B|n8_$-vIЁaL`Zs} MS¯)dz"Rӫ.CeouFiP\fj@$(0g|h$3gCt GH20YbgIFDe-ګM0F(* RaΕöU;3gI(aPHdDԈ H!UR a ⚶lg b`뇰&&` !pi40D"cy`Fҙ@&pT8Z%.AP2 G!+SIVkT͇gǷ4XS,)&e>ez &[ήp*VQ? 
,1XɱELR" ż5+"aS;h8fa~16U1O"DӇZڄYra6y>&/b`Nn܈"N$6Q¹G1Rh!%:hRpZ$NDMY[{3" 4:zv(ך50O ߐ¥TCg\P{, ,8đ82 D@tўy,XƓbiȶ֡['A/fi`ufس I^$ɫ$K&=9SZʷc7.ݗeh۷'˝o#@+ sSe?FfX!ec.{ߡ )sgt0_OE"R.,[3 |2D"Z2…#^/0Y=0įC/t(*fͯa]jXt4ϊ@Uac}c- {e4CF`Er,s2Jr?c )T6J!h`r+g@ )Ǜa8nyô2D1# 86FU4 shw:qhn3S<5[Eq@1X11J 6L(}aM)x cʑd0"gR1T)b+t@Hm ;^s -SĹqG*My4hx0,ӏ3c@lͬԿ7 n@s ִZŎ~d\0-zvQE rHf|?vZsK spQ숢fBUC;fGM/uޝ5 cF0fiSeF,ebT X:TW;|EjtjQ۱&hҺkD@\WM-#ovP;F/0ކ,Mm@A0.aDC:Ͳ_C#(ޜXe:4JW x5w6Ӂ S7kxHպN4>N[Q7&U*UhۃmǴ60-ǴTo7#W=;~je3=du*Xhen3V]O֬uǰ6]}&[{;_p?;S>읩[ɷߝj{Lm4 CJ0L" DiedV,t6ˆ^FcD2YDꥦ0+Cʘt &sIESk\闯Št0& ^p`W{3|nsӠZo~#2ìs/D)Ep " 9h9%%b{q)  س<6C BC]s Rr-"#it"P씥RbAkD1Z*v . J%T8YYOmLIY,- %0liZAlC߬-2R^΀ɻSvs+Rl&Yzm^R5]a:0{c @?E5;rpLE#☣4Sn"pt-3AZ!ČP턦ٻ6r$W|`aw7`pvY,:8r"ٓ,_%ˎe2+IClnVyȇUR"S@CJQVk_"OSzijOnHIxđ':&Zi]mNaŽ"'R )F:d_M<zZl]F=fviܰ>/ՁF'Xi#cOaa|֏xfky֌`SHtŮPzpgAo׻gzJw秌#|74&(>.c |$'_wfx6n֗71/|>_f8i4(;j8+mŜZv[ad㎝S *cQ0|؛-Zi#wy%kЦ8lhLxSaG670 +7:/tf 8'fTL[ŭLึ㏠(ߊ=gB)Be{SsT9T<#s&EʖضF E"g'/QNF11$CH]%Šcժ_o3sΐ}Ls9_\,˷LQ`䄴FsKIkir@UrdH(!璍BԁLl[*NdLQD2{#]9 2#N}nY)qH4'_:K_,>†-w +ɋ~%kk]>s?SϠe>_QyBٻ^$62@ fPU, W.iddTثRF|YKml$ʑ/DQJkb :263gwdlUf=cX@IMJ"-Ӝ~5ywM% oC.8;]^|dXD:oX}N.:QG9y_( F t&;;l[*P{] ?JbVBaS;K': ((To;byޠ}n'{w><5HDZ"!=b*Wc>3a<#g^VN0aCTxXaaȐPj5)^'( \,;#jغ6d3svÞԯT50KDl?qD4SD"wVEH{&+5՗v&Z`,Q%tV.Dz*t[N _2:QƙTwr$hbd&  [gGf쎈^qqҏ\g^o\t㢝eGO6Xt >'qDD}NOζݽvtcф!EdQDrڂ$/e%0^G:It.E 9V9 *9 18B2Abd"ȏ3 oUHYs̹ny8 QRX, ]il7z(ЭB认-;0*H(y:^AM) $i |{dr,6K)؝t\VW}81X 9J<4S ˌ|HPQ IS(( L>vD> cE~Oa$[:xd44ӁRme[_ʋ[?~]>7k4y|gRvI9)o_2o:pnEۉ<Qp0~F >஻k9\9_:n1op=KsCWفǶ1OP\)[Қԇ0LawU*z(=*Yjmo)_|qp:gxsM|Gkqj}XkWhqV t~[fUKq9k?{}7b#GEA ?_m:66>C^?V ] Lذ*ǽo/#?!wuՋZ sq//0O?E/}YY77W{_((!ΨYLRIQXNC*( '_W;]ņ?a:fdRZE)5JW/l|̴D)_ώ"7>shEKBgΛhPZ`Ҋ|9If r'vɾli_ "`(F5Z"L6{hXDve2%{bsZ{2#/-N2pymOz:@fCP9u7Y( mΠBIJ) "0wV3ꬒ+NKY@Z6eN$bpt!DJDO"PT<*_I(eiMh](9=Vl9Ybl3kbH#WL򉘥ܛ3>s=9]^,> }X_PHmFx:8}tv,y|z#I#(> 颍N1wLQ G+E CB ,)# >X4%n9%ضVrg'!1l91UsD"yʜ{d@^}sỉc>74ldXn+UT熱7)Qb+` f«OW(U\_H H:Lpd1ћ&a"L#&$LV$/f<%.Gd |(Xg('*4))U3*#Nʔvu4ɟG6v',=;?y\IuɃ΃䛱pGCP sjR~d͞[(IY>:հ}3Ij(P#`uA^A(%aN%Iى¸㠃fKs6M۝>_hV')up1K006Ec,k r^9kW ~OmǶ +GovP&UBEǵ6A+T,o@Ddɓ#u}|24hH]dmc&E_*GfS\<|n*k-wT&=yZw=PXI$fNƻ؍ )9xHD#FH3~+K™|mb gE%Kif*l.yIE8-}C'!d)S9`@26&LT J1uS܎ tx6;WS&$,̾ljګ$**Rs8tٶЄƤv pq2R0y-y;_O52SQ?y[=xu9=k~q F*6gseX[s` V+_N?GgM(/BzZ͞Uݐhz@6ŸQ0iI>.FD>=dvz]1V@6Y"pH]sJ_^c lm_CgH_y'e&VPřQU܊&`_. d> [ӨNnlXԏ37&'o^}xuߞzurɇ,e\) lEfnuk]K娛 __W~A.0w+ClV2IP_q&JO"z_s-9; P8bD82[K<2A+ G$gIOmphK^wǰfZ.) 
0``I-gpDFXvsSdW;] 2 릷;G8|г_˰w6; -xYU6sc̱|z1\z# \Kr1Ё7}ه[cR;H R6p <M0V>)6[bk%_8F {@*8GT041[ h6*Y>ђȉR4gKr{tP .@M$)'@y\" F{Mi{Vue ʏTPr"# b<>Z/MF<״wO{it0:]S&d@$F\ Q(,rb$= Η$.-*+W2O塞o3O8E,y%<2!7hzc"Xޘȕh_7&jꍉJWo|ֽgtԁ6,$ Em +|=U (6'Ն#3J0s#e6qIIY"<" ґlBw L;UX-zg՘IDkq*WhҒtuFΚlӋ]dEtD10U a<.Uk48;W$.$Rt}^dZ rJgky/ieJX9N~RȀ.&tyty m?Ufo }{7(+kn}zt|<戻 7iFƨ f;unɗyJs?r9)ҡʺodJd\bչn>\,qM҉|22E%^ ]Sln09lՎRgK=R}w;>޹QHZ7J \R-TkkTv5s$N(A 78瘕Qwؗp۵n3>&{iXq3bjB\8̾x'&Uqӈ+& ]D8%cc2Q<  n0b"{."=AZ3E4!&Hn)Ǜ$!3r3$hPwX /=W<%+k*k,,u^w6+Gn0` &ɾ ]wIT2ֻ3<!!G%"y>ԻGuݳގ4PgAк&䆂NaxM܋myQhRe-׌+nqO+ {$'LVgTB#Nx4L殩8VlTlɄ j:Rjsfȵ1O,#-V!H];r<y,![nu '1beP[&F˻>NK7`8ac`8QIH!LH\W\E\%j9uqzJ2P+ X"7*?*Q宋DX/R2O@0doU"W}WZMw]\9+5`Nv{#Jﺸ*us/)h4J*)MEVHWg/UREȃU'dHI/(<39c8,P@T.({7F -\P o;Xⴡbm4-XIz祶Tj FB% $\S%j ԭ.i͓`p]wd6 Zr^q޼c-0gR0?xm&5Ln#Ytq0ujYWaF E`8G|zHE82'.{ 52ja%UTo=lzj܁Ac-Q*ttaʳ~1b3RaIm/o7[m; :姿_l/ۋ뵶欧,T0L}t0L}t0L}t0L?u4^Sw8yB˕ngY)rL(Ι2p&038MV{mcd8C΄CSz𯇥KWG(4fd-o{tj$~+|ۏd0,kXAgrFU.Bs+_T^%H՘%l\LܒШFpucâ~ 6}8ysë~՛'_N>}g."aalJ[[]v~k*Fu9fw9z_ ̝J(E-3x/`Uכ\H}cbYˊ2᲎E,B+޶j^oB'rLxI[/f^η~(˲OW43/nS4C5δ -`˄v=y&꣥2:Iٛ|bx1l#,0ܨd8M!;S#mp;| nNk[E:\g+/ w`́H&0"4Vx%JF,gƈг*,ЧDf6kJoͷŅ+TxuAFIjXoO>zL _1K&3~=oGI^5~jG}=o"W#:f8QhGQP4,o,cIZYR IaG' g)Ap} BVT5tHXreFLՍ#EYXf1eHчREf)9\6Y2 a*G3|6]d ñ~n 3̖OmC[poy6Mv&vwnvԝ弝[[ȭ&=Iͥ/qˬ5Ac.!V4bvg;&]Q6gvBȦv{|3x]k#t22Q7׼o ^|uS=^+z7wzFj./9t gcs{̷\N=?~9rݾءCYQ\gZJ`9rl"dA,.9Uم ԑBvomケ P3>d3Uhet*$K]ZT&M34: 1Id]43 }C!>j P+sGpǧow\zDøυU.)J.]RI y@ySPQ1p, :[Ik^~KEU #!p_)0+j5qv*OۜS8t {ޫɁ{YI=麟lXo /*!**vCNDy&G+U(<n&vDQ dαsUd$”ɪB`Z䂠%!!eDlC7bdiR|&ٍJ5,a!%^ WXx,,6g~UR{.@>|4|?M>w*)"E*BmMx2(ZqJe9D 9+P=!) lJT%Vb;f]ʡr(Y[;k8;Li-<ԮڲGnxJ@,IY Z#2nc ,UMDdpk)I (`+6U0k>M"dbV)F@XMqR&cxXMxOǡ*#Gm"3.ZKAld^5PJQ*ŪLz"d#E;Rt>Jr) gB *4h8H$̥ H\Ev5jFďH:.ΖKYMKESu=.n^iQ;t6F'fSd˝S\lL @F! gbqx*xXM;Cp*wU$>Ӓ+ql>|:|>Q!u1{$eRbwl'f<=@TxGwű?_g{ 6ƛi𱹜N>}>n#`|A&cQ P:܆>vR Ņ3+Ax)[ȬGWޔ4ysWLZ12;]ڰ5S^J(DRx=2ȔBG.V:0%}(%ަz0pu>q :~Hfd>y ~qrx,J=5XhB6s=6N;J[,9X9&iOb'WtĜPSiSU9L6ZgNgJs,2KX֨G˄M1j˥`  %_t`,J. 
1'T$'Tv>G5qثAzBPҖ0ƺk-eЍ,dˇ'*NQsJ@U)gD$5L* "g I*?Vn0XpogogJH$ %iZ{K^"jIX {l98I[U+}`41b6[D$'de;&Ξvvr-5e-JOI{ 2E`Nz^vڕ0mokT*7 V醼oQZYO ޱ%ICOq- iR43WzNJ*-#ƒmYzs>hU݊~q#vYcώ<;JC>[L*"H@ͧD\]I\f24;C)FBP zffd{ diC6cYƸz6mJ֢n-q}hhY}z&GA'|`NH!az0m- w.ylzg?E"-ݵ=0[@3` JmJwt >F'C T&ZSyۦp%)4F{Av8q)WGQGnEHGaT(zFL>^jr^{x;hio6 \zeۛw<$")F)0=2#\⇥;xt~z==|o\ q2q#Lqb_% c%G i:O_둧 ǛgFM eP^ssFҍF(L 4,8p\ hҽWSJ!w'bަlv_Q82KGڑ0!+ ^ p1$6FW6rJ)p-S{"[.Pzm,$Ht =?lܠ'ΕLٍ21r 42c(}Q!F:gl6N^G9=9~eW-̽Ɣc\ui46 ̻M 쪵.k2n32J{0+ :SN%[z$_=Cn$^Ϙ\WbmfIŁO57D0~^ƫsSމqm_2_>пմܶ6࢛ s`񛌫 Qwt}Kpyv5pˢ ¼w ?솕+?&y۫Ӻ}qỲrarssW>G.!wLX2QmoךլLv]XK޽Ze.͏K%!<|$,rZ:>DJw] in!o0av83mJ(Rު lly X97J7I,Gv\ 9h輠%BpZpx_s뫒:lٯ?]r29?/ml̫-_`~y{#__CsC`ͬ2=x~Z S_Yt*F"o>vQ2"}z${֮ʹ.!@m5 ״MRNnKѕ/0Qq0U++Ҭgi=KҪENi{tQTX;8:ei":u?N)[rcf_Ynp}'vӰlǜJ p9^wDenE*p,}uvhLx jbget?o9rŵ;a;ţujzlt~{w疑7bP尻hJ$!zL2?{ƍ)uhU*ٳ\jpq/IgIɯ߯1×hqd8q =Si5*$_\i<w7,S<Ύ:zg MiDɉE mc^G6,D'2'N^!LܸggqgFq۪ 58J}GA Z,x{lCjrI%cE`-ӵ͜(g{$j /q/,1K_j{,əTAL&TbA2ڄd*ɧe0G+Et U.ca0j4N#Y)Y:=R+Va;s7T[{S|_ X%%AqӁT8`18!u.2-$Cy!*f!XxtvM4})qj+ ihԤѢBc AC9:e>]bהPpf!8,4|C bcTN}I%I/PлRƦ3*RD˛>pZBYIF D$lOYT^v"fAԝ YR[Q Bcc%İ uJI͜Ny0H%ultlB o*sOZ~3a ߖ%ԇ' {0KO돣|2mkI֜-n{Ve@g0G,(*IMڨU$F y^o'{dGA=67t3[ݱt&η*Ȧfٹg, 郴S^1DN!(fVd"19͂!O{r뜽gCsL{\ubuԈt8?s)!3jittFѷk!HՆ (w/YOMY;B`NB5R ^Eb9mh&湲kJƠ"g%DӭsO8şV Y[um57ty 3*u&O>}G&L * -9IzdJD:'Rj=ܺ$t˄W[_~9$_IHɩ@>qm9@-o90EWܭ7'ǣgyc3rR̃b6e ɽse|q[^ʒ8~{QL_hG_!)/Fi-q~{zφizRʅ4tNzH J+ Rѳf!$~׺l;?L*I2:] Hnɭ}HKytmIac#~U5~m{+5Yv5ZIx`=j̈́b0T^E^՘Qحh6V_9h lA 5ȑ|ᩫH eQE'ePKeINgmE7x׬FH>qe$ҨPQ0:+2d@R0`2JBκ:;#.#8rmNNh,yB3 TDDYH(Q;,E-rrg/OU8?= L x8W,Am?J?׽=[%zݗL{b€Eul{]ĵn[kV3u޽~4$"sE֨UĶ"n>Kscs CBwegf2ƩU hAm + 7F?~MȝHosלFJ9Ny(k{5A3G!P3 YFOphx8 Ƴ($^)-2|tvwWܯӋRYWN6݌}7k=z>2Aieb`ҰzWQt6 6OeLTw<7?q(3pgvF:X jA6 Lh]3Ա!b]ʾ#h&װ:[z[qG'i-z=}W[!"sU|kUױm1W$dzU\=Cse l%l*⪭I0,7WEJӛhu4[dH`'gk\Yj7\)7W\9# ;49ГeZF|MKx!oqPr0%h < VG<2 _"hy|}0/v5:댞q +7?YNr"NZ8[ E 8s@UfF$ؕLG9Q\NJ $J4G8>y8<;aq5I,OyM0ylu ?wE/?&1UXc9\o,(+5"x/HT ,viő NA츤Mr~׎wd"FӦ!hJ}-)m6|w|b@BG  #B0:Ġ+I&D StC7vW{fMߧꭧsdX.S^ТME13reuMk@hGtͭFN˸.)Ƽf#Ch):}P1!+B<F'0F`d0X.Qf gnH#"$-T#ݤ!ܷc[-| (|2D%'#xN\=6jP-sɴS-)}[7!8U~ G`a%R6H5.e-TshRc8j6.dUsU"໼u/YWzj;p|vDJ~{*?߾:כ_{_Z 2T"s-:ZMkU޴ia Me'/|vU]^?KpؘK~@`G-l/H7Iez |sP ţvL%Ny/|P*K+36J{?{ƥn&6ƌsD5%J䙞ӧqR}߰CSu`DXv;k{}q/oe\҉U  t<#Bc9)3D O!}L){XB77h +&lӦ^?6:'ꢯsU^uIMS'J-gE1ɱ>VJC"U NR(b0Ȓ5Ş^~e%s* v}v)$>#/iB2mY2(K/uJ.0 ؔG>< rրTiUezYM-5 ' wiOGnrلŖvw7oUc/)nn,$f7tw=>}[[Js[ Vi)z5g]^?pիԺͼu:j!ũ_6eC-nnM׍7w%Fۢ祖d<^pgf̏Gy(^K7tܕgí^Yq54hmRfǴϡkf}ޡC1ї-6"ۼ^2miY(lFhV 14xvTD7@6%6lAe|{~D{~,LAK-D$o$ݚ湸 K-z92iP3cW(CܤeĆ(lT@] YWUbИ 129 @CLH+8$M gBjlV+Erw(sю%NIەzh:Xa.eBYwl. MP[zlW+>}% 5iKnO}\Ur^R;J\%7x_rcO D6zX)V*QEcȲlDiIgV25rQ}ye/e9Ȉ-6OV,$ø\)6QMähHwʕIB$1˹ -@ҢW; S-q6CUJF>_NsZMMSN+V ʤx6 '#!cp1g,A@0b)*m9 NH5.h_zVq`LC.͈>Z4Zl4LO圡YTKy9 Pק15\Y4,ΑxeB4L6)iO4ޢzUgK=;l"ߚJ%?',0)I YgDh;dB#3'X ;4h{/ew@N?g5c"b1sbhCTP'1 +B,wDhLT ! 
eV1Vm^0Vw(~ƌ@i$X%Jjc@D($9_eܽxygǝ{o =ϸ~KIN;8_}`49t?$c\֐<= u.?i{І:#`y <޷IDC/@{M#yC.4{$lnPЎlThEfwkG(sª +>,Vq@O@Q#Dr ˷+±5ӻ`=3H[V@!gdzk1{e Xj+ȅdOQ&I놐 _W1ʢU'`+;Mq0}||[+Voߖ%h1 8$@S VHGZB B Hig<\?olv[]V ;dתJvwˠڃ,Jɞ-4f+弚heY>V7q s j2VC_YqrN(k?ĻU?xx'x!^Zב H\;B 9 k>CcƲæ&Ԉpc!U0DILEaB0 4H',Z8@sf%ƾ=MX )A[<6F miSCMWEnR0r .lٖEb3C&TcB"C%1>hk$2WԒ $!J&f3IAiv:r Ur!y N9z|6v UNPU2[c>f e #Ul ݃xY8R9#sO`;w z4+pcfa~3Dh_3WCc*q-'c ͷ0p[DJ~9ʟ쌊/3K`ߺ>s= Q Bp:u= QccyXnPB"\ђ9LٸFR#`KPFrLٿ {Vm7 I.07~0snfWj8 >19%:Gq > yKKnd×mC\c6qFpc@no8Yq&w=jjpg6=0ÌYvY*I`~("YT^;*WWk0j% 3*58d0L$#;Xۘ$tX.O90x+'*dکgFE8^d~fhǩ]/=c]I0` B$q Jl?HeՓ;֫+pP 2j0ckW.2)1{<5'wh@A:F-81Awh{K6I_g;q{f9W+:.3c6w]f2G1&HϞy}`.\ l8RY`jf$7h6Ѹѷj4tFR5W5(!9VXuJc0©hZAvnk_oɒx^̱ ExKxƕ";(˶tSЎ~HyIM7`۫oݭg@T@QOr93 /A3XOz qm7AX:{kfxsxЅQa+3=EN"MSWL`_k۪̥ŽBr~g~¬6Zt7 V_XZr7Q_"Zve7Amv(s0{=\9ܑ9|!ftc*8#$ȹ<18{e8s가IjǨW @ڊMwb㭀l3$fR R4dDQ&!L 5$uB@ (W)P&0AMfTK-_m]Zuf-K"X $a,<ovgM芓]XrRMkۭB1.$O4b 0'{ 9/|fIlU.X׻Q?.Qtmt_2'Lugq8Nmlﵯ`$o6Ǯ/A$$:w:+AYS{o\DO)$E '0q89lLKfIi:ͅL$FVXCgz`jlt"}u«jFBC>>kzɏ^[XLO?Cuaz=)siX)~VoW-F#`z^1O֧ _^Yn̦7uS0k7_B]=CxY^76<|6[YPX&@81؏UN?mNhl,ßop[ k\xӊCXI?a6u,\zA,ol;b}pCoF~:]ۀ7h9|] at8{SyoU*67V0=uH; h RcA$Fhh鮩rFPQLuv8%:.13*tq4AXmep[ ioBqqKZUW]-MH S;+ϖDxSL(16qd٨ΎC@2mbyT"Vs3P(QdNIR *HKS`j9%$G/`yZ!5ÃkQWวl05"H3ً5 I՛I<,>v_fW?9(t91O_!"zLJlv")iV߰X\KK]5xW{}!^tJ^<Krp IjwC z¬6R/}o݆j[Řb$k.+.yk ;pejgyF \Z])>V I"9·9b2aN,p!=rPTHPV(Q[/,sJ3:(aI(; `( ^t̓dVBʐQHl z ,Qc0WA0=w1a &jfuk8E9ѡ(6SC)}:b%$%7{hMߵb9!%\K0OFGVL1DSH+J-Ŵ1% ~鋷KJ(Bю{ 5M'W9A(5Zi' WA79j(|5ce\ȨN @)l L.5Jw.ā=^ckTqR5l~*/hZ'X!0)RĬ$zQ*|@թL!K]/r! ./r c&\S! ]b*B'z+qʓTU螰T#t`LZc~#ɑ,Q!Q" ٥Qp"fz@ G UBL580)b+pY) M0Ob0̭R^)\л-)%p??U5" JCTJ\*_O ڂJejĉ?>m.U+)͒~OcS33f 1 Ԏm j, @?_h~ _ݘbw1 \*:`Cj1aaᰔ "e#YM I4BxJZr '眇)'0OnQ8K.aR,ȔabAIb,]U)W 12:" M⬶*\γ?4aq{E!E   'a4γ\`]s"*NW`4iPe^|w0KW2D aM FF{bQtI5w;;9fj'4;)`@NP>}qp KsKCzIڨ BŎb0ǥ4ĝ5( 0r)gL5"6{V2F I:3\23E(BHY:t~]2we$fW"n54WgxE[87H!li-^sNQ:ThdJt~%8NGl5PɢB5\b vE^5-RSs{Ѫ ]-Vdqˬ3]:$V/͈I_+銘ri-*肊>}Њ4Bt57 RwNȴ6NaZ[ ŠO_p wtV7]>&yؔJ,VR{3GqѶTu5u7=0\\Ź5Wu)] DʎʉVgCjyх#YQ:SBqڗ` 8*/d@ 6B)ꎜ GZCtʊ}a4O_1 8bFb̲全U,}zoQOUONj/eO0[,,MRo~,kU曅VjT ^ H [|q-id||;c7ϳ-}lh>KR'Kawk 6Pg07 an]pk~ Vےru=a:n|я/p`W_֯~6d=t>Ha<,7)SڱX]sQ YA0=S-rPKk]U;Љ`NNɏaLLㅗYIH!JAzuj !d gVoND n>[&Gr]z^-I^+59,Ë .Cg4oݞ?6܉!gU`'j>+ϸn2HN'PEk^C,͚dٴ>C,_Ja>[B֙'Çl>t>3Z#o \ځ4> C.zd;o5̜,t3 #=zH] e4۷K/hVmY}0ۺFsPq6j]e6DYb`>b}VR86׳M͸mCmuաS~mwCaAh;z[ ǵnnhc46i{|[6t7NKDuuN!:?$~yrUB`ijm8<1$]FDD0Y?>ƣR4P %zfSu68dI-n.ӳN -=C[}sm\Ds*oJ1U=A"=&W,ס}KdxLvV,ȾKdC#ܲl_ޤ._$'ٙX9 z-uNx$yOoػ֨M+W9㧔&d\iYc}z6l$瞭Z|dHc]ޓ}*E WTJl$]"#T.U3)u|h9 ݕ%t_";3s3˭ͼDNm{5rV9e'"=FL-EɁKD=Z9zAX%Z`*ր)OV KdC%g2MΖj }` N? 4^=(k4x6%)87ąA_jq28 GYQ6 &cZJpgfdŲB(CzY묀h܌LW5'f桜8 xsoZ~Y 6d`Z/+T67wtU.+@6P;EвBрY0\?k[F89J3-jyu=8u-Dl&Qsӭӹ|BYFżetUx`vr6u@':'1\0; %Trd 5=< P.}6,8 ֿ쒲~O QIЕbf~vqӰMU  (?t/GNW' oʤ(a%5=u\#䙍s =]&h(V.dOPH.6[Cƣ\P< gIj]`_)8 AR wzg# B۪Xbv ɨ矰Ύ o.3)䆅+W\4}r0CU|d#ACƏ g8יa˷209úA91%oOuw0݌t)se [Zǀע#923qЗҭ:I%8DE6!Oha~_F H-g=őMl?x*A*ߓd |1OmL\݀[u@wWOx_[o|NH( 1dTB]T1߬F 7Әp\ 0V>q+ џ~wjL3z+7We~DŽ3=S>0<7ښ`8S28 q ^ED<ȫQ}| r $̻i헩lvBGѮb^~xsR__]_?"f\1K -?!p0i~G>S)땕# 08!i0`ƘLC(-#XZ]mZX.aX" n>a\&|=Z:@o~z„$a^K-|y0z=7I˸_c5-88"h1\Z}h o [ȆV?щ~*mOMPM#sncj&N]}Bxp 9Q(`M+Tw#ۑ(?PUpjmTk& Lk 6cKi4;k QZGCZ%rƔ. 
?g@aG'шqPt6"a{"1D7- [-C`bXE""@Ch1Ac\;o<*"m4Ȉ1=hqiJY`Ar[ c RF`0I/#ad,wZ)&Eu3Bz]YsƖ+(L<7z_\RĮuE" %Al B$H ldT\KI\SG@.H |f4 Zs&a spZPr ˅FaQĄR@P%'Wr*)r-侔"'%Wy\|7KOS%7Iau#4*n7Kc%diUFb6L"1$7Z,>+|6,/]q?|4+[];1ɻ3+9'΃3JVIfJIETaL\TU?פ_lA[]kI%?yow#L{j!J_ZgH]ɻU8& V)؀Sr!AYꑒ38#Y;k7Zm\L+9Ŕ)%GSF6.m\L;ڸJvq1UhbŬVu3M6.Hm\ :ڸV\\ɣ:޸Xt~BL`m4A$ ӹA ᐈSGvgHZ<޸X㍋OX=Xb}qhbŲ|ݸXkyN SsR޸{8-e2΅f`:l5s̓FKX0AH 3XC[J5&ৈ^ Ll9ӌ= ĬζK N2%p{z"`4,L8s<fjB|-XyBPA!T%FING-=.[_x{1J/mwZ$݄WiWL•vڐNQQNHur3 qvԐpwjCݩurJ;J(㊒LL&]15 ڝj >xWj;WjqAN> |z͗brL3Gg62Ǽ,yZ|st9vdZtrX<,j6Uvyv{YB=VGNj_/Aִ(&23Q_&;ńt]U ͇/JQ^ڳe0[0s#)xݪda9cfCi>=QCA3)OlpOǣt}SNޝy|hq4E" 3^1Y\B1^A?~ .س [Pa !7 *VO \9/y(Ԓ7V:7't:˾yg3Ozͣ KA x~ E8) ]'#V<Vq z9uxө>a^ι) #C@2SȨΥ{v9K@+[XhK d[KXa]xnXq ,"kǤF^QZυJPCHҞ #c,:gB0e#¹h&H-,1-Jy<|OhD88u{(H v bD>CD G {A憨7dSAbb`fZ$1ʉ!BTSνC|) n&VRr``Il!&hc9ΐ ,` V"wAfE܀4K`ETŸLc0Am@jBN,>l) ZXL-`Z I@@Azc' BIl>3-<̽qb1XmgM"\.i WWk/}+8Rl|906^>zĭ3b4=y2g2ƴu}\߼?|`Q^z;T0׳AD ޭ2W,'g޾@:_kz9Cz6w/Cy_;c(nR޺ߙe9#VJK.Vwή:a# OW_G) nC~QO gr L" d o-MSy w°ɩZ r /R4Pa "k2o <6FpA T*-ZY{17(*q~NCZds[[J)߫< }S_չК+%Q8P[P-~սR7K^!ezfOdž Kj Y7Y}6]eq6by_\^y3.b8+,?1B pjaO-}a>u@TWTb뒽F9Cw\N<HB&*0EU&^yM/~6Zb@2}&EŒoq,'+.>7"Q\p/^ : +XOSiʃcCLB)abzFmF5gw8{MC=-DqzP'dvKn f.![fv(^~5Ar]~Y c^Y/A-Nc۰N a/ZJ:DΟ%A ,ezbZHnVj`RՕ"2&Ī:HvzMqNs9b<2"@#H c x;AU"|] I5!ϩ5LYjMi864SA!3N@8O$~k'QuBИTՄz3~1 E&Jw@t9@J*g}1 S)0vQ5c=H)7Rd8E&M.jf;-%*Cb4.=n~f 3qS,&q$ٯ(eKʦ6R~=vw{[ 'CzzNǢlڮ~'oCRl|k8wӁ@^3tXrӕ}( QlqE܃h!^@*+ş.#gB>G6+EZzexs5 uc~}n2cc|][9_ϊC7 i=!y2T\`rJx?HԂ#YunOKNRO#X!]Ke%wd=zo0?a^]}bĪգ ·#IWdJJ%9~woRZ܍ފCY3H_в}'5wl^zIOpڴrQݰU`K>hsrEJ)dz0!SNciCcs-6xcAAY!rƒ1Jnl lLΕ$p*3p8&tXaDhE,',*Sg@mfIiBAZd+v;'m]s'1Ry۞ a\x2b8IK 86{`53!XQ x-Q=9:јav{V6 [Ea%A0br6N%T6J= uO+ۖQd*[QhNNf2pz3 j~G~&(K8O Aիu(t@{) K/U'oO?Mf5jYofޮDӚ z~v8<}Ohz Ff0r>rWRI=齋q2N4+Q^R{Wy N /tIu+󫩔.)qX<#_ ^z>֗o5*PWc]0;2HS#uYl4_]cܥ0+h]` '0uO?o g[P?}w_<$JR`g>I^7{* lW[m+qՄJ>\mūn]7h]NGu" lߞM`ԾfqbG&(x:\şrw I e'ыrOff5Ճgj{_%õ"9|bqwH\vB`ɱ3HL&+$%FllGA_*KGa6.2gn vXw~~m3vQ-O?-h-wAԋ5$էK'l) "0BLpgk Jٞ>֑pV)& " tQ<9Eh|fJbhʞ>fǼ%KϴāB09'A 7{0FVyG8ځεD8K*lb(&5Je!ds!ApoU}Nb!`!L3%AϖYu5pal$0R?S!ZJ#Kåpwf,0>6IS%wRhJP. bstLbqa7("GJ.%[mrTB8)Vzk7ϤPR9E?7bFiP^}rG˪|%?})'cj)rBg7/fH Z0w-A*1;?[ϢŠQe-Ws%$- ?~MaH % |{1vӻgeMAAy&fۇV"+Ml4p52P{!s~+Lx`z.x++-(gO\w떹߃j-g[BzTI-9c O3ܿbkY0ާՆ_'ŀVf}_pA%XInf;򄵦a&מ>^ @^g6 }ޝY_%Ͼ1|k:{WqycH@]vgI1՗t[Jş]# ao%g+t#$-[w8ӕţL1SfpS}}Fެ?]Tfse*&˝#>˼L̊[__KBJ;_#e) Oqz|<*̠ze(.~3`ʄWXaJNF$!oǣtfQRpU 6ꖺX m(*~NhN zyiND|r<Ļ}p'J?+7qadl#BAs$UUϕ\e'w, -T-H~hǏ!d%eɴeD%7d!Ƙ2gO`EfɐLQCsEs.*@J@JڇU>'Lΐa3MCn#&Y^ ɩOHy^C@Oe_JwXKi +O;^ (BIPW9`9 '(Q V > !it hΐE]m'1h;AI NAUnv=-}cB4ɹP/* N ENs9HWsq6?Ǜh|ŵب?jћQ 0 "yJ^1eo~(-|׋uѺ B!i_yg:zS}q4'[zgc0M:syϛ9րEDtmYt}|^RДEחX+O2ug/P* ]g[$ԝ\3,Ko(&U%IL-IL-TS ^@8A fL^V( PY֊k'AΏh\7| TZ>I*{'1{'l@w|a9k].ND÷5M#eP&: VWK*sJ3,p)n?A[0 qK[7WjY гR8'FgI\5JWdQnT("сJM!P8I4Sjm]' c ģl!tJH_k[p(Ԕhs:3Vh ߛ5TPGU9bA+foDތ:cƝНX wH%3ܡcˆ>Ԙ?aۦ!RrVĜ+)nNFk;3Z^E'`$DZ{?g}S!<_v2YF&t%1M$4TG_2EP9Q`@*xK8zʔc`)3׋]N6N)~-jƻU>*S,5DZ?Dk+a6=?'wqyQ`nY%!tʵRTvBV^BODi%:,wo⩁`0b6y! 
(,”0!D1Cx q9z1]/-ibwI|»7n{6Hb^1;AI9Xhp0BEx791HP;C$C 67A)LŗoQ#뿯Krvanlw] /!s(:]d&E ],_UYv]kkpl(F(x%.CHL஬k`eȑs]9ʙhĜl R9>9F\hFd^0yj1I@(wNٯ, i֘i=}3<Fי9{;G]_Exm-I򮗀K2.p҉25\sWگ)ZAP)SU̝EzCAP; ql ˊ2'@BI\\a:&8؀ [SE+P).cSR/&M>*ȤHNWF"8tFwK̟jcgw z鋗Y?=֊CoPij?XB {K|JV +m'lMYo`o)$!}9buCݓJeg uTx_}[-gv1O%\t3WJ.AMerM?vi!?,lO-ٲ"W'}wwL֎k\.HLMd^2?=8t̨6mᓩ(VMCQx"r 3<|nZ#9vq?G;\5e WL<:P{EW{tBCąuž$rLU_N%g$!RX{JPeXd!sQ P U.BpGd5 dɵVH+3!1}4MF {ou~G*G*ǸսӇ\xg%}HM;/]ݨWjp' !z|TT璘G7yfYIJ8yhBItރ/~]0*4.n{ ,98_e+ZhRjXǽ-l~VaS?\yoP7pdmwn/>܏±(k%Rr t~a҆zLNїy"mRJV~oqɳ^4&ZU;Vf@H"[>Ϸ#E_+)"'aRyW(?'+gK,dӂ0j_B-HPr-,<b ]d~ɣPnqQv5kzrnZ)o8+t~m-h;?{Fn K/R*?$^ljSS,\mʒCRY{SҐ" %\o/׍^tcx o'lg_\JɾYv;Va RcBr4S 꿫6>oߜٛOxwjfN4A&Eg<ۅx#qH|v}h$ J]%XkVuo#X@v,[Y&,Z1|^iu̽졒b,cGFZ25^=F[gjo> fi̵훓~l q˗}3ҍXo 0hkR2b& P#F: "~Nc0'|6oGc8 @c>\څdZ {ݔt Vfy8DC[✵q 3-眜):>ADߍ)N~HCL߭|Nj~(^#nrR2PsX@F:4(KILcz7#Z&n!FՋN6aC+:1 ׯMG#fys!G&Z/*6< I|z@,@lO;U=+7i6Yufl1BW%&6zH\H\T:N3eAKEe(6Gmb"~i98[EX7U:-^LPn+2>Vśeޣ~*֣哲+ɫc[eJ0xGv03$ͬ7ޝBJrsyInҮ]Bߣ5 8.|n9pH7j{850ltBǛ+Lk{kѐndȶ JHC qvh<]ciih{xa|TD>Zi2{%і`Z%<PtmM*kfdq j\m2a}!|Za2-Ad 25aR= z^f3"gQkG4d(=gf>E~b6m)#ܤ[7j{cMeٮW `: Őw +c҉LppLKz~߶ ٥ jSi 0ȌvYo* ]oSA .5[hI3($^jz*[NI,!ϛw Xs>4w#z_H_MLJg.t/!|@\jj.Z4'{Um`E=Pb6іL̑E'Mpi1PFpS;d'^z {+x/2)YVNWٍWcgKHZ5jLhLʏLH4!cm͗z#YB<ߍ%U^y(]?ea `WZ4EX%ZFo |,#S[?_V#>yC/5G|.]LA(P7 dV<3dJ}.@s09&eV lYF%"XA`0[ \D,PwZ*.>ٿJ;xql X!u6֍룮e#+Z&rv=TS4>Q@ּ0Пb[.@}aqq b͆*>y >dR}[Y\^^ԢX#s3P%8-I7B\,1?b Z=8wn y=mr%S_\RCZu7'w]]/.t'kkڎSUV'/i?_Tυ9`MGGv6Ur$KE)f.EVIz/.G HoUӜOu?&u:R濛,kǑKκU"Y%^!лSCQ*<`[eQj7堅x5,ƪ՛^u V|݈o* ݠ&}JOnݏ2 0mj˨֜|ƕB{bOk^;?7eqZ6eqZxlf(2'H- YZ8UV: Rs6̊o;u|Pfd2q)=aM)#٨o4ّSZ;% zoڦhss*Z6k7oZܼiq͛ݼ1z&E1pOŠH QHc2;#LE<Ԙv}P'c[ҡ Rݕ!Al+oVLɱPh[A;=z·`%cF:IAb/` (,mPCל'e8=]uO'`2yRo&LPɔF$ҬHpSrETC 9M1{$DɉFhv %;Xu݃zODLev7{ae*u.Wk+]kو4k7WX\:iZت܊4 POB"Yυie+a)q"͈!$.GZVԇi wjtE#ǘ٫Njo}rݽ|-WffO!bIvNp7\|(^"|m~P|MR^.um;xZK5ery{`!Tc,$O0K\E1Ʊ @GVL v"4$N8sv<R 1`9Wykeڿ?aK*PI<{w.0wC}4;wf0qZ8-L֙Xy0s呂 PY)PNQus5NߒPM0AߑVuo|m++!PzC92*4|}I:QiXaL0:EcUafbfɉǃw 1sG@%FHɞdb*aYFDQ1c* Ƈ >dcdRN [WnE(/NU$XWFٕ">cV8J[tL$+6*/w1HzOC}w o3ëAYƓʞTBr2Ii(2A9O#(I(d Q0DKYsPJZуRxibxl~Og-uc?'%+mO ♱s ċ@yx>a D\T&}eR҂5&U3t0c*5nSH=*2BX&"E/o2+Kc!E20;8B̔Α,S䎂i_` ҡIj ;b&b (侜@sC%&%!Ч֊%F͜wL#Lr\ˌɨr 1E"q#Fcu rJJ# :bQDyH =6$w<ϒFpIYi 󑨆wzg6KS"{,&EN$e=@ !-#K rcSTXK?B$L T+ dvkC3x`]ɁMJZ\&*jXnPive(+H%iGvY i(\C,){rqD쀹vǘ[r)(ǎ= E! ޜ=v?YYOŻ؟&]Ϯ&^iVkh1 w;z+U )/>4/@3c>%nMD sCacY;%aSbLJvGHYC8I!PբcuPU~.NZ>`jCRd{!x@.kH.0` 0!Bbz?!ݵ0-:`Ztt-1HPlHsNqpBF\Tl*dOLQƇt>!ݮ@sIRy7 jbvJH ׯL-߽!%'Y˱3_2 #@qYH,Ii/{Ƒ= }$`q. gio#Ѧ(e[栗!)=Ӝ0f:%ziх-Ir#3KP.ɞIZ$k>~*">qY<;.k6&WW޾-s?R'\;}+ KGv%M糰 2IXA'aa<0S1M¢32*-83B*L|&a fȈT3&]V4d35g7uĵ. {}?~(0p-1G DŒ7j±$?) ,AP ʢᨠ}&N ,߱7)0e#)IBM"%NuH0\h .ZnomzZ 04i}&=pQai^~[yYI. Fk_>O7S=˧8[7:gn{ 3/\vGgN7DZk_'wʛ?-w;Ψ!վLʠ,J MԲ*/y|O1myj.l Gd&ňc'i}|i}|\ZbIlF-%*3Qk B:%2EG3Dm~¹Ee —k_6Cp^A?d92#<ӶdJ;ړkFS񭦽>ж`ԚaŒf.sa2y9jլ7\TєrrO敗ݰeeSKDtW~x3 %pWw to=ӕɧY(AS*X#\SW29Ĭ=}H{Obn88织8QODcE.@;W2jWU-̠ya׶~~kD!_$P&폓ht}v5ԗ_< Yk{wÛvv2vo~x|ww3H ֤Hdr4`Tr$rЖ`9%`^z夋`4U BrlT\J*d);` :lW3-BCYHhA5j(Ae!"]D4J<8%0b$BJ 3VYeК, +n?pmw$+97H߸RCrtWAa6PY^Bd@_={Xۆa(b\jq*s{sN&dޛVȪqtoaMop U+|E(y 97,y}eN"㿟cI<k)M5Oݻο]mh> ϝűL . `NZLZTŝM#*,Far 3i JˎU~X^ V! 
<%Ǜw 6にK"(Y2^"~oYbooKԟWA7f_pϰ"eJJgRNߞ !}EùAcK-&D5[ ^}̯IozjmE+*+ϪB$VqvP؍*< t|C GtZ,LJݠVMԁ]X|dO,&lVzaEL)F]D%5@]GV;k( Q}* Ђ̴5#jh4YtmqϕFJRРmO5jaKԙ F\i%July"|ڶq#ԡguf#7ŠU{B|<8!ӕ&HO"K~n"8Pg˔kx/Z+{Qbo̔8Y5kP :" FhAEyC3OeܸZUk'~7VP⛪Y9ٹű%+!C G6qk(c%C-ǽ-GI5E6ԱL{y%~&7\^&H wX>ܘ1"#HUJx"]2FWRRi/t'F5g7$j`#'6fS!b,j1BYJօ1G*#z&i{qr=%\97J;e=x6U0EK9nuAlvNyd$Q~UBNf)%DZ Z1e5k&M\+R31(K}kiZZD!ࡻע3"P|>1 4v5-;-mєяNёfBu*)##c4cM^%RqMh^Q2s>LLn*d:::b״S-s/z5dٽ Y56OݛhL/vLP$}]I̓EDnUpX w7dGY{^҄P1"4:py' QFGFZ i\AiK?4I7S5$h.\6WC~*څߒ\Hn9hRjAKUoix$^fTlJ$* ѭh pJp %&XD$`Mi)ԊG5I(ݮa1rUmgAcl,hN5ƚg#<u]s޿r3Gه37vl\:r B~]ϓD!IÉ; ]8y=-M(@{"tlڹ*nϿg ihJkүeYsPz&̡)0 NtOS0t ^x'r_jFjm0RI8|CRX?zN> :ةщv.T]"pN[pݱbh5u&b+Ew/3"^㫙2,Q74bz )E{VpV|mdLŀw <*,eB>+p F|co[𨵮4ʎr޿hHUEr>~2'7jf.UE R[(cU;HnuAo;‡#- =2^ @eXЕАVՎw<%l03= JCYY)˧2\Y贩u,lS|#9CٟDѫ/&b&8RR*e,0^xS`8g {8W%4l KH?dhj6~zyUS)\UB2 X]uLj&Ti/Z<0asR rTXlИb|_'װW0q)">2zʬYKL*64Zi`O9sf az^.>.N|ܸq5/ 7{>~rF{uF&O.K.LK30b0$4 c$Ec׳B-TemQPݳ»ût?c^AALXӴd!ʩ@=…8Վw<596(c"V`TÅiQ9|#,b1axhIn.sb $7*m@SɊF+%A G saDRjl%%<;&MHT\h:RW&as;3E|\~vHʲ.KBZC:|i3gg2Vxp“c Cȏpq78%=w?sI{@W݅9/bJkZ# AX^Իcа㸶8L})En>km0g%`4B*0ʚ4k}$굒{}1Тsj1VEIBF-Sb[;Qqv47d a${};Ru$,`dI5WՎw!Ģ̘8(xJoILעyTxW#5#B3 u\AlXJbck-eAbuJ)&)UZwnrÀū;zDWNj>㗌 J afhsGKbMpL63b cm1.\  ?\vJE05k!j\dg?V7y U  ,zCb^sKrryn||(GUHah qGZ_~_J`fkP TKTOE&~PyA"G JOVi$ɵg w9N_RNS{\),pVyIQ6I@Fbq-xt^#y|+d͵6KrCXR%)!i޸KkgE{lt^FS^np}Z)SQ\ALKopX("Gk6J1RM%Cm#,xcHc 4a+3ʰ#{lcڥդ(1mp&ꦔM#՜FnT/̶ƴ-vI |ESGڢi릊ۊu:´je7C^[z)& mLqLhp M4S,=afUeYW:+6"0pL}r/6$4O0Sw\ Z*\5Z_ IInTWs Q )jϰ~~~NUl5sutԅRp ׊Ҩ5Uٶfq l[GG9Fc=K- HW#jKdD*cL`(1!Kͦ1|\A<zd vb {$S⸈ՋIf35Յ15\tJc,ػ._R!`qӛX:5-ܩp1&D_NP&dE5fFeQ84 Q1Ƃx2% 6t8Rh9҄tGGg0<#>;ƸU7q%~êȘ|GP1{oָrĵ5G慰eDU<9AB \_ɣH-^JnfDx\^i\kr)JN"To'  B˓LJS1g*M%YaLen|3̄BKրݜO8G{;iSiPJO{q&&SŚi~5xps¿X;Tv| ո\S?l$ș&?1Ң-֫iD .ZMQ5tl[NkGk@e-N~ kO N'xjiAor5o.m)578{#EY٣D hILɗm6#Ӣ$Mc<Ġ<5 y23xD!$sXass|yXϴ"rgaA*ns:YAN٨NXM /`>KPfΞYzjlua8} ftu[%閊2sב1Q[QK7="?b7iB)Xa -?7 -Ge>*׽]jj$pF b0oU.1Sx}u)nHQY͸r]Kÿ]]u^i@{qeX%…$H^86#-(kkWzʵ^(V(VMxKI%}G XI6L{II:ȏX&E[;UbFv{QT,J~>4ӆ&e|MV[޳|2垼q+5 H'ݪY Ś4YtMT=V8l *|%n:"13J3U'T"C *,u"C|<4֒YCW4F\D/U|:BݻN+ 5cҖ?'|IK>[.ᢆ<ßS)/5❚ 'Gp FZ ȟ)í$aY <xx`)#4XeN)|'yt|8e`ROH:Yv`zؐY`U᧮y_,,W?~n9鏻{') bZ_wg)o3>'<rfp^2>9opqݗoz;Ow^<;“%9-7%ny*ȯ`00ԥgdTs_so'iw{tu]~<|6w|__=?a700S5t1>xr:d{/AOs.e8:xMZ]O#}w|&_w6GH_ᮟg s^cp W?ΞL/'jt0;<<!Iw an}p̌|0^?ߝ?^y 7: ~K/ӏFyR:w/OONfӾг/pɧ4Lg3t {KɇIIXwr<|fFO`늟}ޟέ{-`uȭ~W3hu fHέNH,ޟxz:l&ϋ;.'d^/9syyV[p?uTA"3, ]gP/}i[XK<6c@ ׅ*S` (Jbgy>O]4ws /X|u*:X=A1_ijtZ/K{Z{$E ^(BV#nC*QNP*Mf5y1*e{Uݫ:bNV򽪪8)i=P;V@2.aP0ajA+ gsd8r(j-x`'e˗TW|e¼hucqP,%qDF0)&H lUa4%‹6F۾h[~kmkmmѶEVFo 9&{6A.o.ܶR"VUTkۛ* e^N3l#"v٘H;CX{A!p [p=g-nu [p>pt Vd'g)prY1d)4V X]qBՓY鈛 S9e h߮.M-bo[\.J 5aX.~<3rhr˝aVZV Xh]z/~/i]/ýs|[|ok^ه{d@oe7 'zňEmmRAeFNFAmq+ԶG'n!v:i'~p$ŷ},Irw7Rǵgc/ڥ/SvqLDFUk]p2Қ Xs.Z"qHW!YFgL O5w1;ӷj.P1}{;b_Ʊ ?z4*0AB!R;nxIIBA+ ke' CДa S52AJDI/U*- W>wY,P@c᫜*lF|BaBg*u53-wyD Ejlu\ͅ3YuY,jGljXIfk'C8(d ;R4"bDøW˞J2 &`H Z\;@k= S8d0 kBB`\@Ll~NzOAJͬiMTw-ISW_J vcx`i11P}H1w->Deކ u[,a$6 `ہY.6g&>Y ;بrܑ#kemﴂL` KY01[l@q1r¦rE gy>Ɏ•3vnYFv- 8NAD2%kXA6 P#WkLHe }$rUQeiCnzDvlzsH ;Hv,Bq1 6>6Bĸ+rLR8CPH.HMU>oKQ+;VEԀdprKʅoGgZRͻ۹]rU}p&p^! \bc fk>ЯדO~gm~Oqj6캬OǗR+5g$t:4NwжvR}+k璉~4Z;V?IPfsi>~~K=e{Mn_( ]L+vLȎgBz">x&hBbOHMZ}E#SSnjfY 4=K?^\? QЍݒP~s-~OAq5f(\? \;M#nF)Ӡfu[;ovx[;?/W'p;\).䖞k%`QfR`|Cz:D=Q[AҪ'')ڐ~+H#3c}Arig\NLw4GWi%VssX9Qv7(G94k5}Lhܝe'd3}hZ (puONʙO8݆>@+ks9^OqϓMR^]ve!Ö }JH38ǧZ]DFta=(g*UXU{1ֿy0DuzR>:~Ɩɴ~Lߵ⋿jo_/}qa\.Oz8h[g_YcߵW]UjoNo ۟_uߟ]MS3/߲GkHW&pCU٫V'Q:0L3=bT\ ?[#:)a38lCHk %ke V)M^5*_#d݇iϛ. 
e V/,`2%(Ш*l/(Ǭ Wx_C=xP`iN==5ʹB߭{4& 䩁Uu2WPԅa4d #DUV+a\rFpʰn1IiqƔw5|aVdCM1ok[1Jun {OAiQ BʔAgd'H`IGbmw1*#UfJYPAI݉{$RPQIk+eAlJaLk7JnRDJ%E=R#0:RVR@f AcF9ea 9!B4\1,AWzڄiLvtRL=DJpN,_j3HG r_YuO⟿^^Z*}0Bpf/*^g^˯_;@^}SQ?u#t>>?`_\ uYkuΰvfⰯ]PBM-}7l(ޝ*7!ti:j R.tTFah4EFx)u5 % ݵD}aBʀNZ=,Cر0bXUzwX!WHvv5)ѯк2'~I( ~4~%,&߹h\A!bC+P[}XMQTj?:6cA7kH5ŌRIWUJ ]"fbpЧŽ"ƤXV6H{К>}iÇ I!*各-f ;4X[HPlHa UF)ѯfɐ0Z5kkjGYF+ٮ&(7J#aaoK`/nwud p!OlNjC' mU7u$I!\4 !W'e{J?{ǍA,@||bွf$'ˣ"Ukz(؈=ͪ^*:nms+^$is\=:*)(iLk17`zRhnäF!&oZ @:En.z3VՂITzr,q;"ʃͶ)3(֑oE} + %AUkKۆԓU}}yDFKPA䖝aԂ'7[nL:9z٨*!0ؑ8IijzIi붡msrg[ieV)(JSܮ^#ejcB$"|mV[Hb®RtSJ BIDZ -\_H>v"AsC Vhfo2a@.Y\[D)w1 SK*Cб{M:km&PLDR&ym"3B)XƸ6 @1Dϋ|yJg7U }m{0Lm܆-A(]vk㱬x~8ͳ˫+>ίHJZ碬SN+:QWIxsPxRL1ۜ=rjZ`Vh2'Zqq[{{4_0R}9T]f}Jp)AK$!58,HhAYRP=4ZszIZzru<;XBS2[cPR9)u2hjYR-r)E,vbzjxjS{Ώ{.?^<ф2ޗj>}XW1y da*  EjD풕AN-ndPBIbv>!6@0)`(:H#~Ű3 Rk @GFAcD" ;oO[?"XH$t\4< 6lM`dsr\;r9mFTN(Ip^>hN'ti6Co4홍=Q miaY;FDr9= )QZFFŷ+~~  z8AnSeImg-bζ=ғ;ApvM3?> k6Mҷd4~OuKÌV|)i{kZC؝v o)=g֮?P87ZLe;ܠ! $.۬!6{ L"g"T$ uzHo66lBɶHmy&Ys S3U=kCNxZ:pFp(@teOR6I`<>)4m N{UTox޳噠%_t5(٪BhJ 1rW ImxH!DR!u7mxA\,[ Vw+(%I6x8:\1PEo;-#X) >,-,"lUCJmHM4A2/b PF߁wo_ 2߇"yvsFAr;Ybi:]j+d;l)Iu 6һWړi\8BgN 2Jy8{:ps|e~=ZLwWO>g4DvWJj)]e fRY3Xj \&]!گIwd'{gqĺw0uo??5C)G1y"s>}oZ[9iᵕ#x#|_W= /8Ydr5}|5tLP x6q% àhȽ>$ ʬ=$I,boW-qeVlϡk4)}W{ļbv~y db g1H7iGV;ȊpE<\ )S,v-0Fwk~Rt+\mh&o!o@IIrrbg~reZ\^.w@,|r-y*s,WSjgm) k*UߧAKk'Q>`(GYiPMM MO@uԺA&[qz>*TW%N VwzMV.n2-C/?tywTC>)f5XEdTՆw ц[pMOiwd>.{:mc4Lc%- +q9{d-U&NOrq=::\rN*ߍQbׇT(ssUoǗ|z KH341ӲPGB)ҡrbiVs>GtFנ!%_=YzBĈXln X[KGƎtR;;ŔKfkkU\Zҥ.\ _wcskE[wyӦsg!awWU$ x2RIP'|5z1RHp:{Mڊ8(PM=%>wub~kB`ȉ* 4#LGX}X4,.>BLAtKu- 0G|E1-/u 똭JhHRUQ\2‡|j;7p;Wn<ُ4wID{^pPUI,-ͭ/U ':CZr}JPx~~ˠ5*+pœڷ_::G*6%RS s$4=O{0HK*0S=HWiټs|W^|~6|k{.Nzsu/o'Eי W3)Fz"(J(ԱGՉ*/PShFy%{NyM|^S$:w;s^>L2B%\6~IpzߵbjXmש\WOp`&#;NwTolzD (V#.*xRRg͕H4ْBɅh[\UURi}c|r|M*(i{RH!d8:3LhishX9؅uETpܗzFś[LS\oJ1au1~0M_g CwTM>5ol1` r~hJM%Z0 ݢ`=vMq{dHkx~y>!EMq|x aqN|)i~Kؿl{wwS1GXbZL'ZѓHrƑ*ǎwBq0u{0:D2JM[5+P-%mY5zfL7uwjX^_|_76^P_?Lƴ*j#BEMDD%Y sBcEfmrB5ЉZOϝ˸43&3%mԚ[QĿ c(1~C.͈ mǝpeY|t f1#ۯ!uTOxj+;({l*wüa]9HP^^2<)=:T>j77UDhkrQRe(nwftqSeX?bD4Z\b-tS6MK72JBӆN5~/W7 (h˗;*l"H:Ͷۙ ɟ;FdYюҁBZmv`v ~ n}ɏ^hTMMҾ]eks *TK4:XA)1MA粿ivWfE/pp?,(~P4ȆZlZN#htiwR6 [.l4ޛmC%z!KU2nhQ4٨2Fͱ_!6Q [ jqR2ߪxCV}7oR^R ba4ъ0GLn,ԩ-^pvmf(őst) rirf:tX hy_\/DŽٰ9|X VP`LS؃;T4t \;: EAI B_{8`Tc{"8 UIzZ ^1iҷͤ- :QCdKYUf'(]E~ z#}7pvrooCM9gaTxѤCZ&K9qАs Σ)rKVQ)SyƩ28lʸ`>g R:t6_K>fDQ{x}x]_yl^f~x$aaL6d8VihvCvE֯$/oX1BJ:h킬D/2!1+@zqaGPnPXٌhB&Ԉ,ɚӻ' iI*~{ArFZIyx*#W^.͌H6ƩG) Id0̀ǣVˍFSrsezW@`WfWv?Õ.őrE3ڑ)uHzyjRƵ6s+%e%wA\sTc^>GgDyOYѱ.w9ֻ])-Mw-H`.yf&N4֎qK1 vg"7^QW0vo dpn-`o'Ba_`+kM?]^.P^ 49s;&$,5$K8̴;|8BrV>C2*P'sfi@2hWb0%5@aU_d026eB۫o3N%9[J 99F1q(5*?RJ,O\BA0IfѬ |ur{. 5ADy˩U;iP$¹VE+M IjXWpϣB4鮢ER0%{Uҍ/'~ƛrIeDl$>O6*w3NRFbDž&.ErN&A)ʄ1uvNleR-3Vo猻GRL{lX}w"8M%P8drg=)ATV/_6adjyZQ'_`Ps5զ]Zqvqh_&Z^ 7%V @jܛk sF4k`^yn*$jMEk(#)4C?*M=f @iXm9c\8S!w \f<FznK]W_ymuW 1@( V*Ul8:JU89HbԂR5H=ORU w/4M&W+OVJTpsuJ꠼Ё5,*%6'2BJR# r8m8\M_qj6p_?yT?WgXOq? 
uоFFԗ?|H_@1!G0Vg@o%d:[pv|y{\ˇjM/RBp0ُ g44F-x]N\0,D_c*Ֆ5dm$@T*=-CjƁ2|ߣ­ +8]ٰ2,E2&" #\%?}B&s=@Jʈ)j~IN3l[ChcD7If`d#F3K"3&P;TnIRJ 1:X "՞ :K%#ֹQ0[0"*@a 'BwM \gzWs$RΏS vnJӺd2ЀО4#zO<$7C%W-$r~'Kh8&`,TP߰'>hNˌ!23ÅVwc6Wtޕb^M'eP x^aU~I/)S=VX0v{*ģ:æƎ?{YCiIvQ`e/Jޚ/Ս9<K54bY1hޥC0h KD#S&$O)0q2t68zm"q Fnɚę,9ƻh7bM+˹X)kYkR*9V<[f4{~l5VڒɖURL1`<%ϴa'_o, ,N ɜ2sVJ ɲ/~)^bj_ȄU0$fAZbh+3bWE\34+<*S6H+SjZJHAyEZ،Чk[}E ][U!TAH֔e%!+V1>JILckZ)k=(B{^+Oƭoeˁap]Y*⩴QÆd&@ޢC?>jl 5G]K sło4[zpәZ~OAE.&$pc%f'bɒޱղ uU`'x Ftwpf-C"s =gsMbEJ\Ck a]&ANK@hDY£)¾f\@I"[mሬ*UqRӨ *)8`|σ`5Z!g̐8Ux?ZP0U  BoORS\Q[68a{n:*2oԿaAS)&IuJ]> LwC"r3˛ $bR >8+)~ r yJ B˨ 9om#z6InܺO DmSR<ݣiނ篞?6U_R^Ę0Om '2Kd,u֋1p@`>niÞc/`ldQB  Ũy4ְjq(!0 . 8 +<^@0Hp@*p@I˨O֨S* "\Y.)qq ,N9ȵm\~Z bKa~r9´h\*J91F:#(Qay"l[sbeJP\[ґ!rَ {gJ.09@LeO$Gr;d9P+t~glV*A"«b = 1bo OlAyoϾ')80x]Sk0Gʅ<@ ;^]a4us *njf*a3Dۮd\ط`"5x2mcZG|TlrG':P㜻1aR`ƕ56B]{C(LcNagLr3V&6HO;N oX.ў0*: ͔u0ܡt' 5x-dfѫĔb, eQ}KR[Uv\kkpFYEF4Mb(hr9>UhriS MjPq*ܡ+a˵M\XrF?3ʍϜ2sIdt;]`]QI\ӓ7mY7c-ځ=P-\y{Q,Lۻ5{d{<JF&3 3t7OX 's~EcsO#\i3?prw-㓆e5[ðs:lȀqr8s͍嫕: '~.k6SX&Cok}v}y>zPƫjqZ_O>Og|7'Wή]Wo޾>{ߧ:~vq٫~z}{븼8zy9-N}z~Z' pwvO=ahOgΤӎ/C0GQu =~ӷ{_WȞ6׳7NI#yvHBu@0xB]+:Aӎd«PF@Yq__|ua{x>Uߝ,YμD{nƝNs˻T0qN.r1ݶ'}X:]0#p읾qrWÑsmǬ>2ZpA8+]ѧ26Bp6Ӗ@EӰ5'sbyt]5G'ФTзKT=R}(h=ZG+-hXeٳZp'Iʓ}v>'!7/f ':^zW A.UTՋf4z_݁?&Ɂ46rfE?1q7:}Zۿ@H3uL9b8@8_ߙGZf5$H.di!SAX\2ZHeQ-p!,aWh^AϘvczt\Pj.tXA, - D90F#|j;m\MŹcX簿;wa>]|F 8 Ioa½H))EMBp2V~VAct[cwXU]}vZݟc\Ld\R5IR3ؙ<"ܥs+&wI[S!d#vcuM{;D6 Pfb͟o.BiX0I3o?t5YkwL6}'0p6qY kԻ–8.`v5nūO~lh3l|MoO>=26M3ԆK7 [ׯnn ÖO#ꓯ5[ʏAPm͒lݸ"_wO_ڃ{)H)Dž+AA_ߚi}eOYB0gͅ_YvwBֺyd^kb1rfN!t̞`8ѰaXDS048voƛtI[ }ÇNub⒥ܪիfg!Am@h_⎗0?9y>QI 4w}a['Ģėrg1ϑȶ9:N@C+\"jʅRᵛ8ߨ|x IjG)&=3edkH\msخ^vwO%ҬLRvgu|wQ:2t6س\}hV/lҮvvYǛ`يH ;;@;|qx_N=!m̫3w3sI|W/ع5(ݝT,I5g,I"(\77ھjS|)Vu~5&kmHlH\N%K p@[X!nQE"ZP hI#;IJq\J$B% Gr0GjGj #/Z׫Gj/枽֋q|L{USK[5fÆqkAURX8ŁM?v6l987Csre-нdhTТ+D:RHǥ$SddC';Yխ{`vlw*LVVg"R{p' @WepS]ԛSQ垍^gEC>M"w&& dYe&L`N&ۊ'~ײyߙ WS$!~έ|Ivne ~r+SjrbAԄ0Mg*_O{ Rdm 3؃Ŧw҈NuBCDR( W I,aL֗.} ZOC=OpLuKL'qt{%3Gҥ[iדq2i3b(J*"Q {e5WJYxF&Bo2g+S$N·y[pJ  {-p RcCv\ټb/6um DPv{oyԖSg7rW ϶ϹC$ž@<В*%֘k̶!a(ЃF;I_SrF#GbCpQi \. Zla_'wJfwvq^7 ?+&Ƭֹ{-䣪{6{C 5 7ݲmoS&o% 7moS’k֮Kլ/ 8;_jiWBF!GٗA;hA 8~'zF>(6+ iI{5K =G4]!=7fn8RF ;Z<Ʉ>+鐮2UO'nWUGTEWI?x~;`8yYsJ I/h_~DEV^9nrD6g?훗?n] {Zj9.^)_*pzV]Nċf Ee)}$ rqe\tq2!eЊӳ8hNU`־.Oz#Oa%%$Ϯ^E[yw87<41cUGh `e2O*kb鏳= ,D9m$فZs;?ҫ;q~̵ Ze>Lsg5鞥A\Wef=P@?"gβ%ׄTW@$c%E,Kb1;HZT bM^cri5OqMA5xtEJL!%HŹfS_We*+UY!W6~?:,wݸ|vU8\c Rr*n*qkU[JZ5ֆk]#:qMhmƵIY]EtU%*]U誦#pk T'I 7COoЫ8U%Ʃ*1NtS1*h!'brys[N|G$귚(wx%Mijv(҉(y]`l9[®pmH )-WP$ ;Ui$M2*"D~>yQՖlۇPܝ;i R{/[/ʤ,-2?{Y@9lD ZNM/gmTץ6g2:ܬ:[Gj6B){S%W[?2|=]\z)|r]e]U|{m$PܟV&n*ޕZ=Q"$i,¤Jm%%uޒq^o}`V֭O"[7OȘp+PxWW1萙IfƂԫB IaE m t.r7@s%pv{0_>hxrw6S*4kQqE߸gdNtXM7m˼2hDTZjc~]J9xTq׶sn;M ]v="kk;t'%5ZmXݦJmJXJ-*(xn@>@ѠIt{&s"aڡKwb>PfA0e1&y1e ~GG ѻ('r 29e$>UNX1876gՉc (}<$ci/ׇa)n&]P&"|'5-UPLM44GI"Z4pE'i1򘿍ě6o6sߎ/b:`]X`79Q];5;q# ۨ B xC #2 X$d:*.8dYmb4RLj{) Ʃv"1y{s6 [ ɮOWKdLv)DKMglV 4d1Gr6]HFSt.Ӝ{zJyibz.eJ AJD5ѕɥmAac\C')dQW&NqD@cIѨD#*tMDM'@Rщs?ݍÌi}"dL(#zs|2Qz'E9}y%>vsAkVzbHǼPwjPV~+&i&lm*,V9%rkV$# ZgP/)9ũFn,i+3JA-9J1N饻xRcB/%9uD%^n% ^\ӋR^6h^ܒt}MTދ*Bljũ$-Y@yQ82 VA #KL!FBKԌ7Ē|agݬf8jPʹeha%517LY`ZnWP3Y^ wU=wT%TlκT3Xd%^4"sPbeii| t7ݥP>PM̖n9+Ik: lM9säz4퉖(Q C/XJ!ЭԛbC#yZeUCI_ A4Ò8T~nô=3imEA)WsT>]|3V\$|@h<&8I\;縋GINH'tj:d=r{-"CWJTM*(/y7G=JR?޼&n?Bߖh,nC5m/Sw9zL'PKHtSJ"G,a:Mtl@sA< ?|N][ЌFW$qp$53ʋW">uɔ%КK84%2N3z,p- X"OReAD H). 2$;3jZO'Ob[ ~[ ZQb kK )u+M.`,`MlC._Ml[^j CX^MN 5"WȔ|=מySk[ Ӫ|W].ܮW&=-c_~|󏊗jR+=ۍYgv6|-Pn V.?OM\bMSh> }Hf, lUTXY%J^o#3[qB m誹 *PQohKZZy %OSZhXh*?N-p_7C;?GCaOv %4Lj晪 iqܗ,z2yT̔ZB!C6w#2)P Dq=)^;uk.q. 
.@+4 _{y?*ߟ|>`K=|vq VR0A1@îf sB!][K_4~KUhI˺$ [Ь6PY`'S]ì[+|35wF&Vx.#[l~;%NՖ"ʶY!TC LdHk߲mR=֟*Rɯg,uìzrl]u.O'f.L\:vӋ &`Ia,;Y*EG[_StQ^SAl*| Vȸ0W[,<PW e"_?}O;3[q*Y9pzK:FK׼nU[/~?o2oZ.^HCj|_Hiswp}~KXپJdshvTZIܜY7[qiV2!s:pr ڹps hrC&_䖢.~(=d?~֢Q(ŏŏIRH\hR YRg?[ 'LSRAUf Ngk^R@t'ʞOƒhNG\;:7o2AFѝoO޸k?gٙaӯ[6=BMBġ)Onܥ >c~I&)dxC3!Sx 8Ѭ=#{ .Y:"/73m{߿T38I%"!_bgy[(cb]T3qd&ռCdpjN 5'43T&f{'Id2"Gm։ɞ\?+*@2<+LǷT R^m:3IV^=2Gp8pwx j9TUcT8 A0角tYaװ`zTKG-EO ;5qȢxOSĚ-8~Fi"~-M([ΫKJ?`}{BWbNaGNgV'q'ݴJ%e'N|oiЃmLJ Af"|O.2n^t8#H_jh؝G^S.+\rPB2'+~?b+$TrzREFRЃ N]ϰB[>yN;P-x h/iVnbre;h"ܐ*y=QVFn)ަD6GQMAgJ[tF]$N#6(!؆;w,4d=^E|W.;\ +Vʠ7n(qː߉M(G b宄s WZ DĚxx)N8+XH_M&cڱFD% Ȉ#W?wsI:gÑ}9.a2DN0}@Z߆FMw׮/`hk|=JK47x}_ wo8z1*BD8A(IDq 2Si9U6Mb) 2\,PN2NBeRM2iù;^:Yռas,RwsC11$Ͻߜ]3|loٛxk Fi!E%HSEtA PvApg8eR KB KPq_Tc;XRBu H͘|-Q*A. =pH!U1.Xd8,fR$^i46- XfI!pX, ṇzL!* D '~ WIT|ₚHdTZaL^(D5beM zhU46֠(Vcf I2V5hFwqjPg* V|0hPc~!WBGt]t{:l8&Qx1N#Toi:^1zd(Eܡ%T ,ؤ}H}Um"FG8zNi"RِÑMp2ƌJ[uYoIgpGV'Gp%&Z_[Aoyq(%GP1LH;T;3aE({KmvSwoeT{FbsU2Wt?L51ZO~2Y%zWo\Vb_Uw/d^Uk{{-m2VGSTj=kCZΓ=*ya,c=d_fˣ*Z+\[USPk {%E_~<_R.g;=XM2#c<ҟ\ЫɊ$Scw7|>mK3dy,.2KAG>aŏr(f,3G>\3?Y~9|NC,ƷéGrx3`~.3zpoLr6}MHBn'=rP=+9$Y}CV"Ed]M !\y=5ICjm/;#>w139ku33wwJ<2.VȘKzwJ|TOAZ{s c/ZM!*NvFRyG`9CӵDOgOw->sԘb; Z<0DMѫ&=TLǮ@l N֟~ljX7W:=oN d˕nUAL-$PI۩V9W_0rM0(}xK^҇WUaŒ%r&TRS9ıD:J8q,t6,aHp/UacH`eKM)ۥYm cu\3J̼3A@v*ȉ_ \ͧt$+d ^PE*o'qP~v8W_|O1ܫf\_}᷒.>3y͛|X&2BW%cPnz6v:^Q!j'0e9?$QX] $Fz$.SKE5.@("tіw,lm(׬kȖi^}7w[]4` pVWì) $`9tBH2oDEY,"CI-0q "MYjI$J34h^81Dٔ+WǬǍI%2JTTFsgDC .e#_`>*_=efFSh$OPD,S'T*16N[äCX5{Lڎ Pdt6PyLЕlP =Ae`˯ ,A$01CRTi#b23M N9jd,E$ß8+2od(N<֬[oII| d:nn4 ?sOɡch{7ц2 ovᒈl_C蘏0Ɣ,ot ё\j%Shg_kWOq̪8-8/8"Pa.?yC g-_88b!-,9k]v҉$,(pblr`̹bFraĄz0L3p&T^1ۉB[pRN Fם \g|G9LV{P i( RqEHUg1!׆?{Wȍ_ru|fl[{{wJ._rl'˙?%[meݒQRr  a:2)yqj e֔qRr\=媾ԫԜ'LR:sA@1t\XVcҸ*/T{+3h#>/w"ǪܽɺvuVuObPS:%sŹ}/ vN8^=U@.r{]րIyEE[c^`b|X(R/T  /]wtPqS q_]o/Ro=G~{юh&=“ɂ^}),F0X^”RZ#p$͖z%˽i6’3q:t%'BT-YQ-s)R3ݐֶ ՠ;Ъs8 .p9 n0@+I49p Zd-XvsL`_,w8 2ZpJ6w5 V14-N$j3APV'6n)H S2UGMm|4Kˠ9#l ?yFsvRErڋ*al R8EXԪ7i,B>-na&69K0T2_JI`єH022wgۛFROc V "H+6^ ,/FM G2L\L^8h\33tCF c ԕh6L2f{Kd%WoR0bmf/S@6۽ųbv#de($.!Dp[@2_VSq )`:y|gs~!?3AWjGU'j5-X8$^dښ1Ag Cpe`5JtYݵ!G$m(X|A;n&kyMה:kA !yK _8)m'hxTROXdຐܚ}Hvڊ0;* C 0ŀ84LR VA`"V0FqYH(2|>s߫m57, /aZ gm/䲻ũF'NTK㤅$᧖[E>~|XF=mAWKvHR%! W͡]6IKBrq8IT'{ ٷjC( VF )X&+kbdP0]ڨ Ǩ"/F0y!:FBQjKr[S#X zE |-Q!Y i(\j &H#ҕ5傱RXcsv"ײF5ZԼUX* Zq&0BHYC@guiy,LTNni;!X=aʬo{0"@.{QD /1O}OL}N ; xV^ǧKƈm~E!HӄIJˑG354ǒ :mN 4fawʩ(C 4LA'i&`[ȍ<ܟVeQ~wVY8& Il؜W"ZimP(v%*ZvXN VڠIB $ͫrQZPf1%蔑T?,9jՅ`?yRPC`6)GMVD/ӊcJJbr]j4H&fodi@l(|IwȕfkQ(Z/G6IA*ydވ54Kkl-r{7sEt,&aVw}o?}^]M47(oɣdYqRNǿ[2f.Ggy;#?Wwq phIs0S߅lmEz9ˣ7>FŔ< xlXWFݎw`5L/ lV `R `BvHy3؀= TgX>!<.\d.&4`mA(W:{ڛLPϏ=gi~WO'hi2p#rCLk;esmg([-6=y)5fhE}HG΁H-utfG?{r.o#MͿmjTL2Ob,ZwkfvT?Ok`y]m-׎Pߛ۞Qcdo?vSr==Ä% 0 ($`c/x&'Wɟ >lsnAfl96 C:ㇻ"`7 !v߁|Z̶RZtSWq5 @3W2vh.CH}[Q8VE0c({1)ʄw<53za``[$0p{,#b]PB2W&]Y?r^)BJ[sBJ''Đ ~أxMf%YA1= #Z;xԙ$R&9myށj6n?| ~s;~|fz'ӁqI =siWVu}sS5#bwbk;!,?qx;x D_qxw1J6^Vٖ#Y5fmdLF^?ެiſqZ]?Г$eu@ˢo/ i= I1)^V)=bz|$R]Gˍ#zI6mQh+`o?^iJIy3C2@sݙGW7Fl6r8Nt> ؒq_/k3}5Y=HΣI"ի_IfkL2]?ܑF;fLl@U=_[ݬ=wp)pIQpqJp~01Du'#z?;8=s`>zZyeDNa])/ q43,YE`H^kٕQ/ס! JJsbQƖ)ǥF4:HIоp YO}vج)QvD.NN}0mOY:qW5ز&}{uL/k m~ǿi"[[^y?fvXh=}B9x93N t2玃i̠dKt6ur#Yeבn$8OU'S-?:bH) 9IQے[!1%EcQvEi IQ ZX"Zilxrh H.6dVc@@V3/JANR5lP͗bnIDFT?+%X66M 61hiZ sQRղ%lZJڳQ@Y!A .`R*3R瀵?}&SIFAw.m>8~ja)\[еcbz3wAݣ8 ZZSEN%' k.uNBAu ?MzV ."1^GX'}LŪ}KE*hJ(դ_ZÐγ9Cb0[4ЇۺNh@/w)'ZL {?e!fPs"azPD;Drl:ЂJ s"?Bj+oyY6kV;tyMcAtl_Xِum'!7]h8y0Q0ڂ=@@_!˵p`{V|yWGYoؠ/v"dյC94GloʠVVs1c Ȳ &Z7  jy.x%\LpC@l4egd#8-lZ^M/kc6F!o !o=f2k!Qk:RV=W}SFPxm!}k_{lt`?q{d+gښ_Qeڸ_֧*5㜗M%5lR*:#[$;n忟$ԕ ARTflQDFt6C57.hy{iO7pbTu5!Eb=la{,7(

`m;(ꋲfA`pE"\>n@4" dлC;?,kdˉVm8g,CqK3™͹P9R'#K&G^[%aXM ъIz5kװhJ)F>3j/W4IyRTf¯Ψ#yt:`MM%@PF̪ 9;5֠\1Zs(]Iuf)G>Ŕ;!-4!=8X !%*.Lx K<6NP*W ™`8%9sD90 KBE~ul C԰\8 ((_@, )X"QTÌ@N;iB"qTq&pcTq+'ց 'FɥnƠw0RbOs'p9Sa@nscEƸ$1 j!ʋ{eiYu |F%9JSm[yEBNRqb* n9,1`1p@e+pj 9͝{rA|2mc0>Zs ܂oa4qop-)y Oeia-) 5 K*uZ L MqB# |1vf07C* J>]^V[==k<?5^Ѓ E;Gwƻ{x `~pd:Xx!선:0)̪PR#!k TJ1@yjG2tTS-O4Gc̛ţؐA˜ rF֭G@ p _ltŧW}[v?3_;6 |᫮ZvJ+b>x~})|:0nGE62> 0$X^K{N9}?]MsP&Kb u+vV cÖԑeJW4!GCȉĺԵDrKTêQ_]B1G(בLG{p;*+KU@t ]dհV;?F]VnACwZVaeW>vdy>Ew}}T'#TkVNR,Te6lz=cs C:zht94 y,IZr<'6j@(;,V\z-yGyCA5 vۄDDmRlmu zV2N89T6VI 4K6~lZ7zv1DhYFl]5lU£g39[~v.upup+xtx&թniڲ&OgWPK6zvTYfۘ{OwMơGq~z.˧;&Th3u]>d\ Gostt[螣 biqp:Gj޼tzUpJ;WI#}QPƭ7xv('dR ۈ+- f㰜{#7?$1fJmo:@mZA8,pb}Vfk͗td`N*\ .k >g7lRyqus[`C%DzM*A36TAVB'K?|j:W/s6W (%Ig D@IbM/O[lSܥZĚ$=v2b':3-?=qn#P_ekI!l3i1#AԞvD?ooGbV;滲SBgCnp&Zln _j( K fu?=>ta tuyʠMa =}ϝVRK4mLQ\qհ@FY nЩb㸥<')%jie %F`R, k\DM\RTG>M>c͎csܾ {|zL%Xf|#h|s檽Vnnz+=ئaghmbMm}UvzAX] kU{edjGA`D!z|d,M;c9om]ö wawVwuC^fpɮ ޔ'{d&VZo/.hkr[nY?Ǎ:g8}l.qiY2UDEhACKMB;%߿^~ 1 P2jI') 7~rM0nGiVi1>bh4&GIʹ Jcm=W]nQXH~^8gjѳۖ $_^ I"ښP/+ǛޔCN&=%ku$ hԵAn_$U"XFdOVVfNte?*Z[% \^U~̃ڋD_)ǔ4-+]/ɇЫ<^+{T5'֍8yt?߷iԧ\#JywdDZ'J,qeF9Pƹm䐑vmq-h$=eK~ggLD64VJut̝!u1D 3/jlYk 䊺D3ß'џ?@ϛ ]j Q $NO/hޮR{4XOr6YcOY^^^Ei\4$ɐH%BH%ђxJZaR֩Ti-u45dFQg? ߣ?nɢx~ـ!vxK?h 1[Wv0ʢߐ(a }1CFUHG4.*"2M0˹0(JBf859KQBlfDjsjTcf|j^Vl_7m]r|A+: E}3@goa8yWarBnӯXXBT1k%(JTJ9Ͷz\ OLq8G8|A2XSq^xl$lo>qٻ-}׫/T 곐*JlK‡3,<> &Ӈſzpύ&^-ɬc:) ˍHRD3A.4^_\=x^* W/"&K5 ZIUZ2<͒'Uќђ pCCs)- RNO>1[Z.) 3Oy9\bMxy%s=(v20Ta;E09XI%* åڛcRfQv- ۀ!2. F je&F%f\=į]!fQ";W7:J,ZݮKK˛{C%"f948A JAޕ5q##$ YÞ} N#Hyc&l]FuUaL Q/L (.8ddȅDYFk!sTZ"z!h f< DH'O_lbOWO_tuKb G^U8;1ܿVDw߳V;U1 ˬ߻ۏ\mZ>:C~jqFp9.]~UŬ 6/scǘY1@~[m%/O0\[`'n % Vf$Mt,TI$.QQtVpJ" VQS-@븏!UjuH.dB@oIc55EIFhJu[et.D5WG@(upBTJȳm&ޥoL0٪bm"F} _nlHk 7JrU]ik"˽O k֌yQ0]z^Wak`vGkl-)gq:B2r\ !4v2"F;*3iGNUHG "Ē$0&<U0=  \+[W*+Is*AVe=4)g7LR0zɅ4n]"u޼bͣXt?urA҅r" OϨ_xFH"W3z}yRnn._|ɝ|N;? 'Ox5/L]*eIo97szF`1sڲdϱP*H ! 'vޱ2#+3%kBM nw6 L˧%h'g]G31h{,wy6ךy4cV? V,gKquч72&$r^k̤n1#cS;u>* ֝#Rꭅ <!;9oqWF%E1EzxU,bE >̻"όC!lNx6Nxf:[vv§r;?8uk~ϰ^@|R b:zJk]Ru2Ƥc.dN4T7A t|cɵlgi2NQìUK}?I_姿O@keiIݔ,a*^2P ,^r<ޟk`q5&N{%l0HJ+sIV#mJ; 09 c cdߐ(K%`j3M7Ya W30#8#ۿ5S>C0YO lQ}`*Mgz"dZW9=FoI ]nkmǘ(6_l,oJIn\w\V4 ]]E^MTX#wDx6F@"k 2Jf9{RjArzF<`|Ȕ1\ < 3%١x]eAd3!a֑1|E$M9!TԑĜe( K +BZhV}9*ciMf8rDy 'eDd"e#,s0D=(88ɰ)T4AdGi;.l%W09zT9[>GW KEqLG"̒9dc1L'X>qx"+QUL^dMds YYZJeV(+=0}p]e殳BLl"'h frIKdB1Ⱥ\Xr `KE 2ҫ\;+Ä52Q, GJ2.N%/-=4dbH~ d*&9Ra\V+bv}c])]j,T2`RyAJ\kH37;KM"X.qٺ(y*+ )7N+姷WF@C8^2 | 7jY2՞fp`;BkcwϮ1q -Ki@swroj+-9gpIVbi^˛iW^fvY^F,F2+\?3^\r_1)NrWvFl6_Ֆ MzurU&Bɝ[|~${gL>O%~( ~~M7''k~(F`D]GWgTgAf(olBOS|1c Ao"$[U~b@O:1)0Dbe-?}3˓|%y$狶Tj*i062z];+'"{u盻lg+^GJzShkTA7䘩 22\J:lX8sصJԼ.4N1o5n)qu]SWš8xFYcLO݆è AF]OЮ M #N:ЏfpLǤ;G|f(Le㨷AG%!Gݘ#G45\;͹r8:}Ƚ?J?x}> ?ZlZzx[΃sv=cJ|Ś؜hx q] %3BLZhjݸuuݼsR3Bfz|P? i&Ɠԟ1}L5hez5xx>CP'Q 8; rG]/!EN647~?]@gwCstot-k^bsm9[TeyCd|F! =BRgwzxU."r =غGr}[gf7#{Q,=v~4{F它mLzhVB Mz~A:qs+&7?py[&Z뭃Z;&NJ::XI- NxȞ2xg/ղ]ٻi)!8=בoTFA,vIju ԼSnjn?Xw~q!#x҇ ! ׳Jӭxǡ;@}TsnF/WZEmta1W9$c;<仚B?onqJ%OǍxa։W}ev܎&0Qj}3ȵr1CW3Je.Ij/u+9cINtcu֍c1_+8Z5o\o=vxz_^@8ǼhDBK*rK:ebX{۶R!RU-՞Á+b C[⁲T&S? ?}_F +N>o7y=E>-\[3 7'Zi)Jq3 ˙{6FdG\:ȅ̗MZ>)팅tf¢mV:$kDpdmbBJo+ZԼsS]sf- C\Cek %ݑE.qZ^(0Qki\8 @oO ޵5q#K*Kw4T-I6qKXD{RgHIÛ!fXKD̠u"k= ߖ2|NIzYV̔e~1P}׾Ej^;ΩQ9KF 8nOu'ǃDɌZ֡xpΈVIވp/k}/F6^-ƐFOysr.'?&bwLi'oc8_Qz. 
v< T{k/<+ЮW4:.64'YՖ`r{J!|䏮KVWC_zk)ˬ٣rAUs+_/q/ dtJ$ +6ƭa4NtH`^K c0ԁ( ¬Wʓxd.Rm9Zwx}9ɂ`_ŅCģ6F dOMlȂN?+/%ГkQh0#{QD28#hJsAqA't̄BR7fm s@ecQX6șv;uʪͭSKsͪMѽFw2PRiyؗò-qNhJ;I+ӇECRzEӗ4^R/Ԛh0IԀlTubT<AA|&Iㆳs/Q%y}-xثHUqlyxry\sex%yKz*W2RFم:mOy~FFH*;->_L}q| b'c.L:)NDz"P+]) #dZp *DiZ6•:uJTguu )r;:J=q }Ū7/UXh;?X՚<`gAǛ߾mz2 {6uYQyA?ȟAgUjQ&ݠ fZ\B:B2bzLR?mZ,ƸPI8g"jK"x:Xa;{3SCW,IG ޷||{g:_n/3{x2=7[\sgϞ5qX35?k5b}0n0̀S"$VwH$ vw b'eI vR&wIER'ᎄ"I*$PWֻPO (k :sWf|.6]xY[EBD吣9G:N4.LOE#ھY猴ˊ8T'/z'°.5qZͤ5ٌ̋Hc' G(N"Jr-x5KՀfPZR{3#t}+iRJ&eI=WR;,N~'`B j@$ PX'V !1CAwnGY޹,\y) Ty!"7(l  *Z x`؛ jo"W!RJY=`| je|ȉ̅R*O/.eb}YGʹ1s of`Mg^ecۻjhd -߽J_OC89q^Mfglcͭ[iR2eByiIT0UPuw^ a`@Pe9ea"ÇHlTY s8sԤ."'ѝPC}\PP*x.U *kP+ќ/P\ "0" F PuY X$Jl _{mͷܨ,ܘ'sn{wȥgJh0F u(<8AhauaF+z*9S6d! hY5\s>8`u2=_t3)3Z^MhR< ՙ(|Z557H+8yEN>-?޾`^u#4˛fǘ`/~ .Pbňw*CbTGU\– ?]]a]4꜇G1ZR) Ur];`N҃F _ilx"VQqTk@ԠIF@e(c0ݣq;%&MB2ix8I<)'e#j L.$z`(IP|pqG##K}D3;`*{N L#E J*Env|9J1ǫˮ܅S^U<\Kk( dUg 7xIBweʮa=?}[^TżKU1PuV}ft$1dȵqIoz TCy:"?D;{sn$Q"Ζ~ɡsex%_ڤcw>L@䯣۾ًO2 ]s4]m{."=RqxFeF9Q1*yWGfQ<p -zלq%v|>EvlM{Ҫ߶ddN b]!.ej__CѬ2/i{>4Snc{u !NNC|qԳ`d-N_aJ BBDs!ZE-$QSZX[ҥt%HLG A$tdʺ S~Of#GFg㶷ˡ VP1g !C eid^Qϊ*\Y~@iaY>+)|JxYF{/PeeIx+xDo;(F3q#\ZXतń+()Q b{i RogIkCqX$8hݾ<龴.II=%IـHB=WyQ*Sǁ0'Ά7lR>ڭGZ6Ni ]uAW {DaZ>^^{۞Tk`}Fbc܂y4ǑZɈnXd0VD,*cKQP/Zwhd~ӝs#1d>?9Aek hX ʆ6&r*Ziƭ=DhEP`Pck$ͮϽ@Vn؉y!Ӱ&(*Z<1ZJ֔ ˯ӁDGB|# U C(ӅIpY Ln4pO b]qhE,gkxbA_@;  `Vջa4m3pK}\Z u R#sj˙DMu'J D}IV<Zf#k2Ω]_VaF㞴';)Of7~UG^/*wE"WQsl ~:9 r\~UcIy9:Ϋ_/@Z]2I(U҉RAoʟs#m}L*J益S܇&~+oZ\Ddv;:T@B--v;: қvhvkCB>s͐)l2˸='B|oݿE&ZfwyDZ' # ,]>t=Q,d)ۯ Ky&|;L Д3[E%CVc?͆+o~+o{CH6)dG1+{gn2 La ѺKn3\G| vm$iHr*DA9_ 2Nhu!6iRGCM!2 +wZ@ARF=YKek 4PBc׶69-HCi-)x4*b̌͘4̞[ mwfϒ*p/}p1V@8Q2.)\5JƸbw f6E&=SI6d+qQ:|}my‡Br=XUQΚuխW_)ILV}o'P,c#5 uBf.cpo'!t*#٣3 8X'[]i,B>Jmoۆ!۶Ld.cp G\*>8`;]~2FVf}0^ˉM~8zI 0k{8vz;|,Fn&rbN>rx~oR{ن@}Rvx y@-ǝXEwp)-=}Ais!ZCѻtS6!=}Ai3! P$_ hCz[)tNR{Wru"$dONq:IZ/!.Lr )׀Nk 4'$3PMW"IhcȐ\O25%zڐǼ0-ÊIB#ƒ0-;ߊIBU 2Y/w¤f!p 0-ꊅIBE8< p09 cٺ0rԈ8zQA:Gֽ98 G1Td.[MQ8jh?AMU dBkGbKY(M8j!p^5G#3|-$Kwe}p#N:#BrMQg2M?z=:#Br]p~Ѹ>#dP hu$o67@Gfc1&PWPf= F+2ꄠV+p^R5@emMՔ%wDv=7͐BRΕڈm9>)ؕၡi5RtE)jg;fq kܓa;?Z|9}B2EK%뀜2 HYZq+9 (U)$0THRer* >Qԯm4'VLP*qf..xqAǫdSu"0W^=2p8;ė8I{;v8Ћbq[J߹Zj|ղҊttS]hۅAc 7a?ަoǀ!7 zt9+eaѱ"LjzE$+BH(#َ8rGǸD]R8pxwmV ac5٭HO -J7,pj!j,C|٫WZYe '-PrH?[|B-噖wl(g(<\/ݨu~H9d]\`plMFg`[D:WU*$.1EƽƱq{!u@-~󶊍PL&1bi;~RYL͋a|ܣ=πr_.6g/wIn/F,fz~^2J,"+7odze7ar|"4g{zWꭊEs+&+~ rR9g9],YXΙX& c/[MWO7aN^R`a̹,0E0L#!0 OS;lw4 ?[d!/F\:቎:(D`G KC쟺opb_aB".22q3d b}"ߴgG]&U[M(c= sL\R,RW210[-`J o&>"TH*G}Ċ#SNN3\<(X¤&[yn*<PX{Ld0=<ƴT.kK3w+/pXrCI~Za}-AqId!x3!-Pq) D~pgDbO ޯl`o{\]a'6UYcץpN2<XN|/??_v$E"&U("Esose{, <`&r|\2K->، SUnQk;^VF SQT7AeyPXVn+*D2gGqXf+k0B]"/z~VcJI ,D$#`s,ԇ*32oNJDy>%|ZK+a!Gǰi9]dk("H+1GB_.3+kx'^~?TYzю$ !MQu9ƹz̞̕r)>V,{,,ȓD檽&8q V#EQn],^&)mPQv݋ D|Z?2sȧ $ i@zm!gyMMjD}5K"'1MDd#8#\3 w*HrؼǧvwH99# 06e^uNc: m~ɞ9-EO./Fu5ݟ]< Ž#; ?#sKΑ!-or]n~oݡ-Zuuül\+:QHp2}lsV{0P,jIb3r>jnpeP7ه2s>qQ!q[-9vZ%CќD~_{[xډU1"^(޹-~TvRoR\np ҉[’D ,!a iMBpp<+rfxxbgіwdEW}I +7WH>DBDHqu@8A6X$Hk]pپcGO0ǎK q~CՅb VvN"[ؠ(QdyGGwx[{+ZQ\5+rs$Ġν5C|-";!#Bry/fzvgݻߝt} %! 
C.20LHGDcIjbKWl'!NFPƩ_A+waE 'z yKjTϊUj"r:-K&r`/f$_~ݏ+n0@rSzwtws{cS.7:ͧ3ז68̯{A#*L u0ŊrcG5`CB Y: 24(d!mgx/(n\'(Lpn63-a%fH,F"4Ӏ$ ͥ0UWd]:hʞ  q>ֱ%FfB3oK%9"a9C\U(sl]u@q֖ܱ>"'`T>+e[:Bʹ' VSC*/E2ij5V )0~'e\tsН-\sy0SYN_3bd#P@*!0Bl^QB# +\;iF8WVyGG| ]ßV&V nk/D_xĨB5* eK]gd`ދZ-3gLIA|8M\NI)_WB-"5$0bQ,Fl$f~GI}@:@F TD* H4H-9dY%/cH`!`A YNc"t`dt.\^d\[f:Y㔔Ͼvjx24p>;B&RsDS=ДBNͥVk;jj4uy(T6{hDMNmɑwt~pwuY+c{5PX[w;W}C\`>/0 vPuZBwi«Z_4CZVoVݥ_k㨘~?y9+0bmCTUU!V78r`ki?%)\s`8Iw-:$["GurAjWeV{Kӯ`yYzb 7  3`䤠jnݵT-umYKg{R*/AV pw:ϊp*>\Dͧ9W"Q+nJȀnCDI*(I1I!{q60ҕ9H(JK:l(!C]*!wmk*|!i =PPgp!Z_!C߷ Rlu/ m`6=CVBmf_tF>]CptyF}{:NӷdwS#1@LC.A<1tu`.?au!4 1ISeIr>DNb mV韓4Mh^9'9iqpJ_P`j+Ʉ8gjk"Qg~ yny[% -yuIΝ߲{n! !R0Mei6ou>$pxUK1RRR "p*(EQ^+Y&%`D2i PIv~ȔCa$OE)V,A\v?T)'5{k?dęRA_NveT /n#w~~w1zan&O˾Ą AoGy1;!CLI~ '1xSoìdsJƓx Ja$y\*@AM#|`Ol"G?d9Fmd1{͂L sՀ6wJ\2QDc{[B aT(/W%wߍlNq˕-b'P[=iQIEHÐ L>,*Iƪߛh{B2!@raN4A@FTqCqMBpi [ϛiXCv=X1wg\{00xW$Jx)0IWI깊$*4h@gIaU9G|j{ 1LgYLͻaA? FyAD1UV.zT9PNqsLDAdm4djai:]Ͽ[8ZώK3,'0xƉe33,~0b ,3L#!0 O㙡 ршI IAC8aUpjB[baB2FlO4TLY FmO~(}s/}qTOa"}z "+bm2_w?v;#=jmn7?24f^4Jt U,n~-)vmrSmd\,dABXP&R<Ҙ0 s=3Z2L' N}fg7[BxoΨAp@LB%,Q d_$c;/sRz;Ex%~AԪa,$5\ =H-jV7X[USu8dMwSO !&#s9xs'`\FSMQ°`8_ljWXwgIʂl}}?bV[{: V!KTZ 14* -fFq/-@.Als!z[U䞔rrrrUW9ȕ65 - -FA?b؇撎, !ӺcM9c,dPZ P8{%AG ;Ad::ウb[;K\ t-/܂Fx4bLtW +D;xN<(MH>zk1J)۸!#TSmNTfNU\j - &BC1py_Q"5;|:yﯩ8>|KosSo[D` kP($Q(51HaFPɄQ)Fi'N00X!,SIQpgԁkQ(_2NT& 'KB3g`B[׼_T%c у0Sx{~Q(%Q(%Gu9]8i"g:bC$lgh,*QIbw#"U(HNN"NaIGBuy;N~4zw9]&gڝR Ep Bz[9ГYsȿz 9X6eVEG)J ֶU&PmVee0!!L/~ fϻ)=HU@|ܒS#Kb&ȸ|y߽< y' 80Ӥo'·si]|z~2>EBcZ(RZ(iuE4#H:L,"kι%!2#=uDǕ/ySRvK'*rK'/LSvbxշغ|W7?n2Sz131w?>8=k%ֳ$)DM0Ib<&ϋFx8 5;QVT1Q@T[郔Vtl柯_fl/LR2gǹ ^^h&6|hq,@U,OU̎*fPkn.I=z>,3V|2M8bQhكӲcڙT=Z3"v8ƅF`:9CpV Q/ zGr%: w/]KˉK%`,lk-KӪ~UB׫^*W Z]B>ZJ>܏Rѻ;u,ը[;ҔZ6=>=K߾S=C-~|jI3RL csHa/T_ "B|-9AOgGn}z]_ ӌB;G@zhKB!eonL1}̏c>aL{TÂYGVɎ4*@m n St UN (rS+d;ID VhJpѶ|[ߩFr"'Q2`QsE $&*h1=儮/#=L4cD ϩSzt1 :V(RG(COw~ NcXp`{% MS( [d4b . `AGOw 8U#7ɗtd:m= ~b74ji'P0m4z -y*{nn%mh`eMS| ϰw]Ţjܥؘ缺R2ݫ^L*eWL-Q^X5î-ۢ"]Bm O|3֭Ҋ.?nxq}L`T$uӶ>5Lj#TPŕT ƃwaIJ+GmQWZ2¢.,oInEp %GhЉڝC8%_i|$>rA(a}2C4>D bZ*zERQpFWԨ++5(t읏J KAX٨b$2^[$ RrF0b҉K'{s} -RiY2Is!rRz[ &6:4_ZTDD6FQMJ1&_7y ~_bc[VLrjZV k,9 K? >~Nl6KxG[Z7P<~O9ۀ 7=+I{߶uO6X9P`g6HɮIm& iyhkf)l't3^hey0AĎe$*s͸f30͸=ϕNsl>}3nqn:&./]ۧū]QTh:G?~ b35]8Ɓ a"(O"ăw z,NFx 82c>ꚕdIF/#6rDvjڳBjD5¼^@0$@4pB]f#18+8jHtp@HIPbE8\3sH,< +ނ,`׃u,jE6xѳ>&)$j t}󻵫W"g&( -s2䘈{te.闛,c0Zk~:{SUNg=]o98\%Lvf2jֆQ7G[ٸٷ:.-g :i~P߲y+p08 RHL$bTJ#؆obR퀥e$HG>!Ed\6]Ǭoq,j$"""X9"3ZB`сA( 뺭{Ue;:c: WtYb['P/2GWު6ФO"&,2nyش K$Ea)!&_g? Х_ d&ztx9ta:IG hdk0 $'\e<c^V)[10i'!6 3WõTdq`sj8̪d]O+*+Ll $.zt}85K'춈>"?ؤ{WCd5f\]>fW\Jͣu[!^;fy3x`-Poݽm L&|vLL[v\[10vJeOKm|y uʃFRy#໫z΅( ݳaƫl,ɐ #lrȷQ;=3ዪN=xT"a-2edR%7pZcJN,mNBT'7b%5=7f8>؂7f%&n@dIE Tu.sgTS |c0:n[oݗ!.:yv#.>uHC 6+\`N# g63kZ-kC"WdQN VdzD/Yv\'J&@"y+% xt1 {jCoiBY9eȸJ Bi&Yc,x#ږ=&P$ϼnDj  Fr4Yo \6keG/@wJKRl?Dz.Z*R;D=]+[5kqVxS;EWljDjecoUVU- ͪO|/`AsLjBsRZz1E8zNf~ BuйbX[S‡)"``,/ȥ.;5*>R^f0ڐ(t8  ;C> {ylU0`I )vY#uF(QaXâV2&Z?hΩ ^;T5_gs+;g ?09z( `]&oi2,Y5{U r}{y PtVܽng]cq*VɐbݐfH*X_PP|z!1o~kòxe20^ .}1&|\ > |9M>:q|y:^<7\5ǃ7qVxT9X|fd|3 N0—wmJsξQ%Mmi$."5r^I $%Y> 4DIy̐3}'Pթ!솶Z ZD$[Qpt{T2{'6vQk0#")U$Y 1]#e^;qG,gP,)/sH *)?lҰNy8*S_/vfLdDq.8aW^%Rd6_Q54 ӐC(%۫؍U=/@AL軻\aF G9@Yİic yz$-7e2iCT*߼xy5?ɣs1?ذ~iLY\+ (!+B!cnȂ2u4aD (c]PR'dӷR⪘nBFYȐr~1g7|hݞ'~&=ofwyo2{^!3>~jz]5n CKf3Vg3y2; )g3V.umOZro[\R %48+H9"G%%)ץ\Q\U.W}ZK$|0i{;4 `/?f%Ӯr t;GW~=%.yk޼^Koϔ.Di/^ַ(]QV>0_;??vk'vx!1%.J9p+cy:LX'@xp1Bj'*N|:&moA9v'Pj'>'^;N+g̋]vJ}rߕ(O?T~~Ցǀ# TGRE:\ _?O_~|~d< \AMrr㑠 ""K@!,jcؕ93gԋ7izû &laci"⍥=s^:B0ӟ(-屁T.}HԘhNe@AE-GkHx1RoInr_j{riZAaݚH8=i`a&FcYMP@YA@-UgHy.0,URRܲF]|vL(d3K8e[PaFdYZJ*'+:K61wTXG1[vsL. 
1>QHi6 ۝mUݍOߴ(x <4mS5+1(pH+[^tщVEXшnدp|t/^R\ќB4<_Bbוpġ>ƫE!`p,0RZܬDoIґ\"C]Lpp ćkhӗL*5.Y Bq[l['l7OZmtz6 U=YވI }B BUkPprJs+T `"ʟ+R'mAKƲ.}k^n5*}{˞1*NR6y8nKnZ3Yiuշ SPJހ/56=Ɣ`cZ7dA&x"- ".Wڮ O4y$6(T'qm}I*$A@ dU݈ ؕYͤ99+@sdsBH*[Kj>ªƎ'L4Ɓˡ nT 뉇@>| `n '@B0 Z36Q ]C =䔚bO쌜҄<1UJ!E t9e^; tYi5VnܴGqߙ@ӫ7aaHه@>{"l O(v$ P> Bd1D{K>=@Ϸ$r\9#NWTh[YI,s ~7`8| 86/2j(#7Tc`o/<*pm_%,ϣ"#=M<ɃdnVT¬J5RfA[]t;=廴IP O,"N,zRzBvD?zcS:6+)--g'V३w~xG- #"SF^Neq: "ng[e $E^^+SAid,Q)@TW[ 8%˕lLV(\3lKµ*M iaD TKR$Su=šSbF~&OP~./ <˲9.2-3_ꜰHնIz+4VwLQᕼBsBO 9&eltY(%Ug WfB Y%RI(˔Te%k*+= `z$ȼJ$M4߉ꬵ?rI/z:; ZĖr0AP`PYH_f5E&iBLXmaf: )( y M SFc)ul:o8lf9$L8@a2?|~! P!Q(e #dֲNךDF@l;,|mkc.VLJV3C˜If8daB1?G;+nԭN4.ϝoVj{?4ZږM u{qU؍~NM5.Y3V0ntk[SgZC:yB9pl $9P#(ٻ6eW=#Q"ò!j3D8.qE ==_U]V_D2t~h.uS YRDě<2޻IQIɹ:?5ۅ ΞX27b"ӎYQθw#J 3'%k)E?r`gG3י:HuٯW=#%ӅReqU_W>ٛGy߼~{<׳^yMn~q_~;[#/JoN?6t/NQ^fp ӽL^/|vmIq]x);fӭZ -z{Mf.Pc>\M#7včĝx5Ãy_'WdUĿ=;W@YĺݻN1/nR~IExnFߥgSR?Ɵ{@j4DUwQn~7w 7{ua|W}K?ӗa4Pcp|ҽX- y/F ?^z .*/Q㣋S`^o!ϻaлO;]Ν_WyX$F B ޣ~hp}{v6Fs1 ϧ`&y(\ /}:u%Ls$ p"e[1kzEvPйʗu&L LŎw-Lf?FgG㠠Ș8ALady8Wf2  &ƩyLbGhrޞ) Dp9)ϋ۳ow[*R baGpxd1BvQlrJSi#38b`IИ(VjN 6E!jxi2Iu)}(YmWMcọR)݌UVhLD!"wG"}SzVB1a`Rfĩ+Rx%mnڈ/M JeC$a!e^e*,k^:!Usuw{7Co/ٕwdhY[N;iP)߅:y^f'f)qp|i̝x5 & 3,&J'[Wwa-A x3vYVz 4u7uz*'6&7e+E*dSQ\\+?^'M9K'.D#Қp ui.HV1ڢO_zkCx!nxNӃN#ю^=_b5ͧj+@+izp qqܬfJTo*2[.~ȬB4)ʱŤl0)ˤ[ۂ`!ŝ ֫^yތիc2@t=n4Gǭ[ze8sZs؃!َ1o| f̚<$0zǘdJ-k:TTQqO VmXΪ ت j$>m$X7eø8]uRw{d$HƙUD;5TQ4:t**qlz0QnY7:B<67 t[~p۶motD7( ?TaSzZZ.K ZNv( dEKKk%jMFvoդLCU-OjVjFἯ )DAt=Ci ,Eq7rF9,{sʝz7{q{<] @Ul7c{'Q1 q?$$XF#T +qB #Z[}9B Rz 't2+Hr{ޓó&L'(cRj;`=Cz'#q*o_<+͘'QSAVi:9ះp&a62׽_|h= _r^&9{<͑@"BAׁbu T1 BiEU!"{Y9eȸW\*e`ԇN}0MR+Y𠊩U/w^&PR$veaZj:i506(Hb+) 0T"5Q:ǍBUIZrJ'&+rZ 1ey-ZZE{=@ YOcL"}z1vp==u޽{|݇F\@9Pf]"ޘ:kXD %[$z '0CH d:6L:O9tH;@b^߻J&GH*Y 6b Tcg`Dv 嶰?}f!|`1bV b䭙<~vyl[6Vmy']Y|K'wlf9x0?aᦈUʓ_ ;l Ck5OA jܔ \`0-ǀEz"pj:NoN4 Z# 3f,(+D1-{7fppѵGA&Ϯ2x-wV| L5 ^ b|=P$ j2 Zh$ezc*3r|d-`Ě-H"HҬy~f z匋 &\ Ug >\+2e''7USK$ݫ{T%THp6K"vRH V8D|?{dr ;}lz~O/.lBI9ZMz>"ϓyEIneQ=WIg37:p#`GcJt~M9q?D; {T}a'vґ[io?3 74t}o-Œṓ%b%[x 腣;LH-D`#rA ZY_ųrRG\UKY̹н]B\ or&Ddy1vf!eNWj" SD=EA!o!,4hb&gG $=LkY9%-r D|!K1&xYBH2`6U#hB*C8Cw‡SQ,r L?R)!2hAD B(QTqɈ53X7sBNU&_5uecv9oX?AҒh=8dL@(RQ4n 1 ` ( F5 ¢01Ż,A զ, 'ѣENBA`9 V4zc-uVY0}0W#Q+%UsM+!MٹiNŀA% 34ƙdĊ2BU|PkB^JNUԁ9RW"#X>=;ۚ%y:徜K"|!{㵵I뎿9g'ӻDXf͎ivtbt/!2{npV _{ /^O?߃oqt:\Uw= \}"pmIACݹ|op>!9;´SL$@4)?U2!r_"D,r Vtg@2$oG TC+jP %"BG 7982I~SS x{K{ҴjEݥ*e+D -@1K,X&p"RDl"p82O,GBQ2yl"p:ڧrȥ~̨Tf.TI7y%ENQ?r"1q*Eܦh,D;)cXZ) <_\M#  DqTph}:;gx:Sj\!^n_ocRq'HqTx6Lf7x#FMdi2p/d5cQpjcIb#LUK] dĪ!jܮl.`HVx 6G 3&)# Ox޵mcٿ6߼׀Xl`?` ]x<_z**i ݲtuy9[n6Wp)EwmG,b, nm= :9yPT1-;74N)G둚'β;tyy ~<]/s<3,l]֘^e5 O9⧧?x>,|+|7\=Ez﷾Hz@V̺&,98NGc"CqmVm=oG%n=o;yk` Rx̓/_o~ ~mH^N%Q:[^Lԙ[&QߦI W]'h\%7\Heu8eЪCZa!JVcw={-LjLJo>?:%BfB ?kyuSi:rO4i\lxe(啍 OkjJhTxPȀy>P>(-'J~:aegˋlN3fKa*x0W #՘~N#%Ak ֳ4&PCVdev?s`s)'>{* .[؁On)J9yMRr$ry?w0)Xns^TZI [I̚ )6@<Qn|2 IئCB6xA唧*|uIµ+ 4"9[U2DS}iuRZݧ>5[]K6Xg OP)]HML75$d缨71⳽d>l߇܄%Edך1݌ܓuL'n'Sz 6oe3-̂ZɁ y׬Įg4Sd2"G*`+\\gFL2& (g.brNV/w|29M.2Oj@~ ,[1eșӓl!MR|<ÛkhzyzY>1hlFGRE"OsABO:"=-z2$;p,3(QGGR"]czR nVngˀ42.d R][)eq Ap-U6v KmnwWFɧ,Uɧ޺O g"M*4.e@h:C,{S\ :J/jNZZA fˢJ߮WY) aݛtvA4Z,cDLV9b–¦3痂hUR7`rSƘL= yXRM*U>8ZE[0Ӳhsdx 'YĴBo+vbV|awlLhyGqqHV;O&}p`vG5al`ϫ݈?;2Ed3ѭx l7Rk@&XCLTS>]-mRu7T j > wN)۷=uku7\ Σl@? dFW‹sǁC/ 3VP~<:RO[U 1\Օp+^EJ+[38^tvF/ap痠s#|=ljfὋ׻?*}>GzWX])ФdTXa[ '&ӓτ}tHSQ.-t3͂wwjL:eiCZؗ?|\S}\ϺMs}]*믯!ޅF{idt6k8{gr<]=zPaJ1C!N˃ugQ Vʚ T5- _WN{p9io--~ÇeW ņBTFG@;Cx2=6.aԑf$,)J[axQ-p]Sb  6# fꛑ>ߌ :l)(tD$!SPW\9 ^yyx<*  ɐ q`J|?SQ2K:0o1v`r_0H1'ZN*#jԈƱ֠9ؘ+Q! 
> TԖpvJ&s8pDZ_K\EשEzE b*bOH^8.fRo0m)EpxYC҉V*q]tiq=8CZu|| H}0gEO kX):HXAz[w˵K8=+[*}5.`l"mT4 [Y يw հ/oz̈́zaF !&ѭvlwRs.0`AJ[(Kx-v'2nW )эFRYLlT͒F :Ρ&^'г憏<,̭͋[Ycg'I<]' 5R~?mx ''w _UTk許r5[,e[w^FoIPqYm9b[r7"Y?j8fgsTɈ!'B-P˪}z3I qD5PKMK/ ⿮W t/)$xwrS/Ou U1R<S]#6dՌ2:.`L̳$ Xy+t% l` A*n#w9#/F0,"jmχ%'_EI8A2i,>v̆ sbdkvHʏW(~B+g̗ -gP94^JItLNf$I+1o'6lDÑ>ɟLj 6}>$h`P{ l&zJ bf筍p I8R3yT{dlw>9ɬ8M ֯XWDԮ|Sͤ(\Xr^eG:]ARtL'W9up 6v Hfi\@9̼7$S.("5ڂY6KC `?O;ʈ`EHqB@!b^ 5vqƒxf3o}~^"ۥ_Ѽ?/3ׅ .N<_˧ŁO3/C^|>hf#DK6gQ!8" 4_Q3Uu> Th(S@jwq2N8]wʘq~.h Y(#0u7.E5kulǯA5ܜ`y'+oJ:.nZjAx֜GGiM:5 V8T$R2nIytx\-9K%vApF$ S, p4`YAjr"0Z ;8=īxRLGjfhJ–Q('=ӄH"Jo$l2Ne9 h55FbKb.CA0-E|D@uғŠ\a8+?)bٌ6qB iyK #&ʙ8SAo S l8|@讪8HlO}_)I6EA&<[bzWS$I߼,ə""Dv8* n9-lNc6{2at%&bߒc/r% 1h`mphbO&EjFwtfvivлH   n4`bn0,69 aUZi p "Ua!"F{s 2R,WIn{mk+8&Ϯ6;;i& oqf%ƃf:@ Qgg$?6FXlʘ("NqMIݹY]zᓛeOkj9ƫ_2,պm/X71,2:i!iys/>n ^nhղ&V,JbGY;( `+߬R]aw#F(c!#0/Z.Oo3EUJŭJLLdL*׭l0c3[ۛnv4/ȣn0lG  Q,"&b-7V5^4SKNi+PM, L\lla)GmnvH!6V;*X{FP}{ 0+-x=D V.6'JQIļ4 3:6[.],7-hM zmc2:MÇans}tl/wfdLS_x48{d2D zSYC^@fFl;ka *s_U769;rX:,PqS(TR i'XU.w YwmqOg\ur}jʚ BU^nz湡i/ΖEdwt[L`M(I*%)7LI,2uJ%YCQTkZg9:kn@ ijŚ*K1:IH$CD%0URFJ@.uT'j,]5JFz_+vJez(+Av Zՠ%:Ev #lJ61J1. ަFT5F(hGA[e?b{Oܙ7Ww:v-?{ Vcgd AiV$cpc׈f , oFɑ79W'FD[|:H02 R?;)Vl+Zh/BHg: &>ҼJ}EںZI/߹18+De`ZN6 h8&ҕ_ FJ F`dRoOېr}\)nɘbbrj&)SR BKc*E28:V-4R:گZ\oa!|_ 3gu({jNjY,nY>VmeII- Mr{hnw-Wvo}`=i;17Z?_rXg,P=Y9Rfa8gJ* uz&+ڼUy9YHZT(W=Cpб ~Pd |Lm]~Fm?<*T_ 1&w!kݕ//HØ<晬__urho0Z?LL 2YO^ 6 EwVWmeYPyɤ0xms+^'{^ ZP#DM%"`zP!u-ӕ }嬀+b_|T~cF\fߖz/ <7>6f,<ԗJ ފV+^%௰Nnt+MCҢ֛X!Xpj7p_&S`ߛLvDfZjvhFf~X*zB!)"#@e.H($ 2d3ki P4RhN-}umJ!qĤz?M !1ǹ˔¬8 (ْu2Epq(ǑIr3*R\R=Q8q*OVK .x8R8a+;4;zK-E1b'Qt)XPDc$,P"!LL )$՜=u?׉"9blE$JCJޙB5ag|#?Vx% m7E:8a1f"biZ|Vx2_&B\\QW1XsFOYo_ Uŵ/j1J`,Lw/{л{л#qoQdIQj#RPKX8qy(K+5̢)ӿχ97诛x8oYoZvNtZg/ф}lv|69;GSoʓX0+?>l~@?6 ^"I%<%iBm4J%Rs B!j@Li8Dn?XN-[{5S^=vRӎmL,Z@Gө/rhIlOe175Sl-xx%YDl#44J$!I,61B Ⱥ,[V pƺ쪢 XV I"-dcHo%pM>= wqcxcOM'urrȓTF#J\1ƺzD!Wx|74(+i6?ϦnMaLF eqf/=U`: p:0?qOV!A72j ]weeG5`h7ռw0"c%8ai^>@[gݗ_EkfX9jBNj=:64M: ŝ,~ub/5r.Q)~NZ<:! вP0;-ԣ#Pƅ.-*ܚu3ۗ^V]8nPڄESColY]^k Ť&8i>_z7"SZl?7AWul(<%! J'w~'*Pi"iL3(KRm5"PZR`QZcK%ݿ8M,LovC!9c!b"MSA14j"$2J>EFr$q,*QDR BD1%W3羊\S>|$ngm6^&BnbwhS^uڥҐ7WA.jcP)Mq,R8DP *1I嚡D&NVU 'b[ѽo';0_cfQsv[;vK~xzLJ~~6[/?~K޽|]}>XkO.Aٻ޶dW,<,& & /wdɰ 濟jJ)jmF vFPo>M&Wwo8+F_Le Sx>XGHӣwXQ !(ּSq0h\|$Rt+nu# m>A&eT:J5]Ͼ:Ik#t~*swoݱE^HP]/WdTz݇Ox4pfT%V 嬆',)ʵ<|2I\< fBi1DV3\ )hE= S{gQL},3O\|^P?Vڄ.%1Mki5{F^ϋ̌Cۖ3&mb`8lڥ2>{eƹƕlHsŅ C#Ry^))3MKH0y+󲧜Dn~Oz@ NJ t6x6J)c*J׬i`M sۡ7QtfNŞĝg"=A"]f] CZ18 $C΋j XuMW໓Y\v(õ}3N[d< ~:o Zjq'0eVA- \ =} `s6hLK^ /_RJ5n *޹:z\$ %ZlM1!CIQ}Nl24L %f#PDŽ'mJ1UFJTJtu ꑸJS<> *DzƪС Ja| $+ 7_8XM'_/7eenX]!!B, ꩤD #ÎN Bx-0D q6:MZ7Q@geC ,X;rh Zc_\e߮_FW6yQ$>R%x1fU:_oM.­E5&[S f\x ) wJ"$E\1B|! B-1P8u$1N`1ed" ,,6FT{5܊)}s+Umj'S[$`J +,/C) rDžp&:/V8'G>[=6 ^6 F]+9]ƍ`ƈw!jvمꍎE۷.;Ob@Ԅ>XB(3T\)b@KQ'2IDփV"V%6i}IF%Fx✑1ksTθ^- sXRb#u pXj0L^Y|`Z*$wδ(Yi Eƀ l +@X35)rPx) c r^uwbqcˣ`wRұ*nlm 3l'NĶACTb{`y1"w -e*3DM6fqf0 W4uy{=ӢwCcT/bڥ[WhBh \~~۴>}%Snnc~qfɝG#{l5?GN7Ǯ=}]^M?17=ZoW|rO&qKi10n=Xx;Dz܂R5gw2~৘hb9yN?;xQ)폣[^]>h\QrtIFS=~' WgqbDe IFNfm<;6?x`̫g{gceY;(s6ndv =[sq})ECKn؂G@R\aeA;+4@a1ܳf9Zz]f |X㑷ji+#sBg55eYiLYBN4t%DonePjGw=n{N5 ZwqbRҚz0*<0c\a,č(`XvC.Qe< !?+ ?zb9RZB9@ASJi)EύNJB*C$'(4tkTD %ea !ʑs(Te5ma'ϵ4I"z>,k؄fj)\!ap8sU3C6ϝ\GckQP)ZQҥOy{ Op;w]C:˙4@Xhi5ΰ[c`%XxvkQzQȜ2`KZT9e׶+bCKf<Р p% >!pfƊj fhxǚ* lÅ#JŎ}#O?y[N2 ;ߤZT OfL6JhU&{D4ȗ-ބ`Kܘe ਁ{7`x&T>;K29۬Kw?HcIyYxy#p8S^9  4އs~2`UyRzCwKri$!\DdM!<했A$H͖ tXP`;g&`W(CNq0?t4XHE{!hY]Y WR)t[wN XKz`F)hV)5v<᠛ Ԯik`<ߤIJ=7sswQ0zLIk@uiYOZ^(oWJ[}˹T 6+e .Ҙ#H"s:mvďήJF0`y F~ +W*! 
m?64'}^(16dU*/g:̢ݯZ*ynG5fGR;^h » J Djƅ8:¡B}}o0]+\W._ǼaE [3}E/yAkF,rXme D ,SX0Ҩk%ȕVstM&55f CUeP1,-6+KǜUl5ȉh'"I.I(bŠ)鬇U@ 'k!fx<7 17`s aȂS$3{r71SD?8>IJ)=nIGii,9-WF0L w*qT_N1"CN7^O/MB=1AcVIԡ-8淳q4gM}o . v}ܡcN5r4ݴR/.r;(R-eeDodvh^`uV0/ء n>="֤ԙ/6Qm61Jz{ղis1Ikby{`S4 g/!T9Pãt~(T.YyV[\!YJZ }Iy~vsJ*`uZs NN}-x^5q\UsWuyՔs \a1VRsoĎ{xccʝb̥D &9 w܂OcMeͨ NTX`~b0W[@HChphR/eT(O"j0c\BBKɰ֪Kld?'@bF7 396D1U7[%zqa8 s4(:` sq-;Y",a1w個\%X!QP?cА+jv%!Skx*_(WM wO.%/ﴇr5tMGWO73{Pym?'x206GO1+eZz=)6u+X,4<^N\jy'0,{}zmCZSġ]?Y{o{h蘅ٰX}(t;fDKbe~t4*aGýIࣟ0DfcqFS:{< oUp]g٫gw$a5Wl&]-׫֯G[):}1戜U`CVZM",A %3ZD.(*bih;ubb'n=Hc%ƿ/?Xc5VU  ̿9z (t A~h!0?iy=A-ayP y>P1ι F[t6pI=/9+A4sHpϤ9υV5+ B/Ő{aU瀸P EKB{'lH.Mi&~0(6<:7tVz\ ְ/J}u{u6t}z@f|(s_x(@_1񖫗2͈/wcDqZnz9q_&HE',2vR<-)E֛x0|{}(ǯk-wo~2oo'j10اO^KJ1 ;Yj*y9]tu\NTnJ .D0  &ad"-c#(xADVT\h 8q+uV)iz_NK)j: P"G~G\!A.`;ecN*uLcx4Hwq K-z}J0k1Un% tw^*M(SքvT<-M 9sظx2V,00M'1 A-b#Z)9^-XkJKA#H5!ުn(P?j9VCCٮ>TW0 iTWaXDqPΙݘ$Gt(]Xh%D6UvKhJg+wq \x J +-q:Jw]E-V6`LPժ~(;3} s9>ka&Au<@ ƿIU|cP Tzl 9sh|GK1j o`%iKlCcmuHaBMIےLZ=1[GX1vYA:Q)NnAK}ueד,ϞnGV >(&v4w@b~`_NVS뿀9Kx>`q9*e&r2qr"QM-hڬTPsa1 ;c'@62ZWD PöMŘ!ص-UAhVy^b- L'FŸ[%[R!- l$e!0e [BW + R8d :8M HbE0;"%}gBu0!+%(9IJ (Q$7qs7q`iԁObyLa,6~sOʗK?ierH o=_7 1 95"}r UNŴMu1T+x07~ ٰo0d%FĮa_:\P![mnblĽUK4WuՒ;᪖8Ԝ{58vJ>s=|O6'ImMS][uε{`Ƃ+j)unt4`{6{kw~0ϙp|8zuZGlpZweZ fch=`M7MtӞ5ݴMcW2mΙ&a)mMl?=2eweU'T!t3CgI5a+Tû9ygw  #$v+T*B*'# 40 BKH\FׂNSK/ޡgPt);'EmJUleFxxS) 1Iφ.C)Z^`fаRZzk)q!NJ8 pp Ot4Cҹ(tYL!+sD).7~SdD2k9YՇj9 JRvY3v:~%?6Riց@!ݠPe:::V26yM'Mh_);_)/(]}={Y i!UH=7I妁(QtC\!jMºV A&mj+&ʢCDZb1 Lwf0Hr~ cˁ@U[Xp/PyRRzdZoɑZ˧;g~2ԪQJHͳߏܱn<9#pK$fd3ĥz(~ng'0V4YK"x^9liy$ѳXEaE9\$Ģg_9NfT8nʮ܉fk SSغ?k1i7V{HEKC5٢Z^{{pxcar)X?<ߍV5,F1bp[ZtvjAJ17]<-7=ה]A] }E0Z,l~& 4?j)=a3K\jHBp͒)Aoh7^ޥ{nNu[tѶv?\LnMH.2.<MʵM5]0Mz-uoLAQU-KN n8drd.elSqs{x.cm-ZnsFfqp{)Fܜ~D3Mnw~:=IV4S2e $~t==L)}t2c},tO}E,kOcNK@2(g{8€TCmj/shTRhtzܬbuE}20epʉ:#3)XZ:& _p4Q6ޒ_|vGv7 z'8 -|oFl=fӱr\-TkRRvʛu#oFTC6Ի|L'nD[}d4͘@i.Cs7}L ; 0ZHzl8j%`@ kVHگSmxYFa^krOl#5 r}뇗RP'|⣟`.w=""C84|zwF?L㬜Szݖ`g!<_E~$0 /{T5cZsLF_p(Ǥ0:O3!dU-uܫ37Z !Sgz;i90d,m R. W.FjGvXZp.9,(q|"+K஋rôC;+/!p.q,D. u@VBT%HR\wW @Sʑ<@0+`$f=^Z&!nln'AY1mq3wADVEÙɆl0<$u„cofbJN0CeI0ȃJX,weL$3,"$ۖ|ٚ+gg1o=;k}>ҧG|Aܣtq9JPtwkFduT.{#JY }M|F6w7gwyuGT7Dz i@Ӗ~?AsIc{M$o^̵nHi5LTVImɗgՄ@w]VAбw2~7NIgˤn24PY=55'6yc{;re}$Eq8Kj i>k8xM3.L%=oll>SS\丨ZO[yz:OuCauZ;:\tTY(.ˣRVGqVaNEns]nc:_QiV>A3(7=ܐA LjB']k!+_0NZ_dOPti` /%@BNs &d^8}.Mn ZU"5Cad礛U;ۮkzb )E{/$DZu s\䔈U.ԚIVG"O7j9e_V:LvEWy=xd)J F]￟?=l`.җMƠ8T_KEU!sɋg?)l3i5`yc~.CyW]嚭_/atIEs/<^yAChMU'{7h -I}Gvyt*xg--һ5a!߸oSL+f .FNy˗b]~fY`˜l2ȥXH?^+ v4–v915 Dhy,M䟖Isz> Lç2AߔR1Q&K')/H!}[ kU/-zYRDrjFvZsK. V5l.s8d6ETLlY,m6@(xQn%_Cl(4X9Ȼf]|rp0(cV,ɰT+$e:=@V:c޴y+Djzu+$+plm(mkµ#pYxVtkFuފeGviv/]]`RPhBNݳHlqvH 9=}-(Tڽpu`ºSx6lEv/YB2%~BT6KkmJ8=i 7m2mSc֒NBEUC ջ=ͪmBTlPʇ0Wi:(t? 銫o7rs*'f86w!iE6u?LR{m5^maUKZIJZ. 0a|EtXѲcŤGSoao@e,=IW iygqN;qΐLH\nFc֫5ڬ8{D9!ځaM/!&H<ޱFS$Ο^ŕuZE=K8):%\u[ZہC+oP+0pbv@URҩ#834@9ڥڡBATµa֟\yt9Vsu.V v0=08]ZYawZU4:a6]kP&/%T;vs+ I`8NsăX*`}rxt-(Y7ߓPZ#$\KM]\`69eNsxKa9QGrK'mNW 1]c.\$KA:]'˓)bI \W0X]X߃Q_ l5L*3ٌݍMo͟YA+fЛQoҿ lA9} }ͤf4{dA[sCT_bGKM`Q63=;5fv|9 % X*!!c hFI33S@$\KrIhji*ߵf-99_&7_~yb TNr If'E#KGh[*ѧŶrYJXAzk8`Bv#Q`q<ؗb ^̝ D@&7VH T0#1r U4f0&r/'|*%>Zk9EsA$FB\< k(w!As~Q`c8G`!=u(EM3s1 :\Rvܖr*R3-):+Z_' ґ)vK ձikɉGK-H;-zzL}Cm"jǶ"֟-oTͳ"uij`/+RP r/HSOiyr[z>O{f2? 
^vEOߡMuqԨjr{B4?HQG2Q=v`:*=/CIw7NꋾEL\Uv$58BsKVYΒR|iؾlQR%6: R7)zrH_ kU-X*2&|ƽV*ESLL})'2.4mWlƗLƇMI*/w^>7ݜ?2+ye-bA %dRV7TjUd$R,r%05|^;.#OQHPNqg8VipLe;\EZ[-ETq9`[q/wx](f-`bw-͍H0t )H(wgv1C*Z "ew{}HJ* ԋE"4XD%L$@1E_>yNZ7\GkdD!p>MRYY8P4z:;zXJ~{5<,\%++ uF3\;3~o+feExwgTG[=HG/=޺սB5Daq|%EO8GkZ{- ;.Ӣ#XjQ:B_`uӺ/^}5 љ%/#TI{׋Pct=M}+Q\껭 +4F>A,D8d[8C].q+2[;=iRcDϨENMznψO;[|'ψ =9gUอR7C^v@x]qn,S-7Zw?Dh̀F믋-ȴ/VzH$uNѺ7#,>95cMo\I6I[jJYSJ4=i3o"-M(Dz@P$x+uB֟l&-A[ CV i#)հF[ 4w}nhT(,WJ*u]J)hTуri$=:6I55GYVxԒ 6&;u ^:GѸ믚;En{/#{/' -yxƟ *h4E/W@T7=ޏ.e_=8*7o g翹ivzyy/3]|*_nXg& _nL2䑦5gJ4(IR8$L)RZ&'jx 0hF\k9X yRRa +酔 ZbKNOM % [@߲1y?m"J ;1xE>ijnTPN|"o. i#Miw.٠9"}J-##f mE=0:"-z,=H@hG`s{KQ^ +pUVHG Cx7n9>ʛ3Z@o-਼*ok@4pհ:O{d`0kږGLP2̮h8{XZ -T8*86x4+ W`.|Ja'iga6ib1dQE]uYZeբV-Pp7܅G!,"GA+<7ϋђWGM_t1)hA6|(smd1_s4 >_k-;Zȡ^ſb%1lPM`H`­K8t]}(|>neEӛ>ENZh3M|e~{ Y7K70LhzϹr,+C!\d 5dnުmZV>*[#08?g!WҺL`<+Ғ;-2f ,J(f42Ζ}aѵ5AjyH.YoZZo'/kNR{Hhd{) WH(MkX\׸;!r]cZؕyr_5J.U"M1qnR ʒae WdN:W("dYG]%m{h};b5ŽU.6|w˒"?<#̊.Js>O+AOXOjyIFkKD r #?,ntƌٜszZ@&ʭY&c"22gVؓB+e8KTP8_!k rKh  Gpe 0-Pj%?{ARdU5gٛg?suu0vLs?3f.Өc<#pFpjv%ďaFfLwG|YB(\l" E_yxhͶI?(wcT=IGG▔+HZ$NQr b^]Q/>X#E3.]h1EZΘ+Ų3VS4dL;ձ\mcڔqflW?ICAɮHUșdާZfiOЖ t6]o\ ( e>&Ȯ%J)[ /V:0֐+ ZʠKE^ㆁ$Rt$ -m L~O5wra[emr@{"2/fU#ttL.T18 Z=ۧBL0KoNoÙ`BW 4ߜl2U}GMʻ!T=MQrYltfhg' #nəRJ~6[a-UvlK{ Vl'Jbyٲj֨72H476l[ UXufL#-?Ai͑OEOm{ru[nKU˺-3X2G@yd  9)ux9rΒ// -V/閄4EkW>rr!1:LYVLdEmJ< 0^y65PcU.+Žy `,\DYĄjV\,9 GneP s \_s=Aڦf9c" ɲ/L>rp\2B6g"3%gM2q4nJso^ U%mC~ Ev 1 Ȣܱ9.DX3KlǮ -fXFH$V9|e L rfpڠʵڵ4M@PVg׻6tclFfO{Ja MlA,Pj!WK1A \|F:9\ncgG.V;/'؃:)/~F],~ٺws;3(JB)r,mFM⬶頒ꅢ'fV`wu^ԍ _"bx[O1 T9ߨ4Jz,. 2V΍;5I&ɶd gP1DX)@!* Hbԃ(f ^d+=jwMHvWz{*P3R]kE61$Pp++U?.|w]n^)5ӵF5ZLM羶'llTlT [xj5+GQ)NZa1QO-bi~TdY/V$ۊVZ4D@۾J}J>Q]x:b vtDh\N5WVHhTPQmI<m Z*'(|.e4H!cZ"x.~ȩH ] 0QzygB;f>һ 31Ȩ+E %FF3L#eWtCqU;[)˻{3^ڟ1BDYHy2fQi*'P&}ZTj#%*s BKe 98zn}F3yh5TReN0u:0>Qv28X BV2U87'BYU,6CvQhwi1[{-L2c`͌]Y0K1#rp6眞@P,jxBkX̮urw_(^ѰDDrEbYkRHD)^ r`J} /#"m͈ird} LwH&z\ Ƈ[CXv,'=C Ap4+hӎp~4+hîi6.h>!{fP9;w Uco J j쵚AeO *Nyq@ڒʂy,/r~e"^j>@35/&RJ_E;r~LLm:ԗ`'ٻ6$Wxٝb݇"剉:,l} QݍٲC)+ˣ2*3'͂I咴dϻq r[O}.7$R +FEV->$M.͕iY0zd󧹙 ;+u) PuF'r8:yd(ff?]].j[zBR<U;L"VȜ ڨND*Dq}ONtFF$)C".$7:4VI ˫E|P Hne`K_-{.bL?nﻬ3)ŤeA _t2CGV П*!]ClH:}}k.<1a==4R J)aޭ!q\6~Ma1Q93HAA"{HSL4BQUz 8LGɇl7wq*2/ݧE$^_\WBi*e#%T^l;TysěJ sLuzT+H{;ʮ /f$NjcǙ!Z:3$%?E2M(uO!NiB}D֐~< :MO5Ҳ*V6l)Z4gvb<bK1I/T5e^Cޢ;Q0 T3;uJ< p'cjG5P&ڢڕ,)Q]xY bx˶yDzZD3J[o+[+d! :/1'}#&En2t,LFMf4aB>`͌쌑oR}c~3;dz hG}z&n+6|smkj,Ю/oo/¯z ?| 5W(KT7Evl\CBn%}};'ܕj=;aBޢ/Pw"3 MM{ Nh5_nYoYPHtLjYon[ĉ=zgAi@ogK>S_y[!j9l-(h|NO"[󎻏=3l.Q⥓%-6 o=ܗjU&xQ!n6ILN g!y0Mgac2mPJ#w^D1рg-=$/n Xp7[Bz2 .F'lt{c_oc@_홅O%,`zabik,)/>>][p<]Dӱ8OCs'._*ݢز`w=nwba ҳX<'-ů y"Z$SkVY֭)]u됧Ȓm[򞢙֭ y"Z$SIAղn2)>u+ GtJרc:$ʤr֭ pukBB޸ɔ(a(B[)9SF!OӤ.mV޹;Ӻ5!!o\Dd֩N' 9!Gӄ&CYKwB/iIITsJ 8g/tzQ/}M<~풗L4y_z'`yYGɒI68y$뿿ǿ,?{('Jp 6&] I&GZPĥ 2BUR4xS瀷{pT1&F+5r`󧦚?/k:|)vRq%,3Kj$"0A-YG' {( ԿM!r\d-$ZI$Ԓ?rVD8P 2Bs_؝TP<&0,U#'3U>G?ۏ-Ŧ_OJzr? ![?ul?oBv&2S;AqXv98/eqw8.#,5 /Ff'}ssys6"~c}?g|#Tng{|w7t6'___\]>=fG7  O\ #ad}iCyBi{:S4<)*(b᪱Kɧz-4C_ar VAsW jnO(P|UZ$ ͔O_MsqtqJƋJn\*ITEwWЗOgfUWLXk7Cc,FCܦI3gMUOOHgfG QW[NGĸ]j_ײ֡Y1%Y @*ϳ}1p6t]元ZsqҎRoyᶯ/W:fxeLS8|<$'1>ILO9)CygASdeH]yL9I K]P.[| lDRz`"=RL$fJ&1S2I)IUO`p9+GVO H/hS]HeZPaC.Tz-J18I݅]4N~.B2jFO?*'zxCPU%`R)) 7 õgm7;!y`!EC{pn}TahRV*9-ѻΪy6hc=FVH-K(\ep3)ォ0sL9؋F/Fwx67%[I5F8< m) ƕI!q1 #%T%]Ęs Y`rl{vqJT"S{Df`2*Dv` ' i"kZ] ix\>&eR1<,qD'& #\,02e;3Z`*# g'-6#Ax0Q:+2=牑T-Ε `f!i>RVke ޢ.}!͵h ps$f$.~bA܍#Q́9OȊxO &* L'>f TZ3?J;hVAi GkCVQ\6Qr싒wQ x SC$Еڄg Uq=ˠɜAzåˠ;/mCn! 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000005221041315137424504017700 0ustar rootrootJan 31 14:58:32 crc systemd[1]: Starting Kubernetes Kubelet... Jan 31 14:58:32 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 
31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 14:58:32 crc 
restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 
Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc 
restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:58:32 
crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc 
restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc 
restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 
31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:32 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 
Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc 
restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc 
restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc 
restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:58:33 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 14:58:35 crc kubenswrapper[4735]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.094156 4735 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105695 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105740 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105751 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105761 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105771 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105779 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105790 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105799 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105807 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105815 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105824 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105832 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105840 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105848 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105856 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105875 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105884 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105892 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105900 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105908 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 
14:58:35.105916 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105925 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105933 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105941 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105949 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105958 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105966 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105974 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105982 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105991 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.105999 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106007 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106014 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106022 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106029 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106040 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106052 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106063 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106075 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106088 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106099 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106110 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106120 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106131 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106142 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106156 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106169 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106181 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106191 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106200 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106209 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106220 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106231 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106240 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106249 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106257 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106264 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106273 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106281 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106290 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106298 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106306 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106314 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106322 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106329 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106338 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106355 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106362 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106373 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106384 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.106393 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110107 4735 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110148 4735 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110166 4735 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110179 4735 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110195 4735 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110205 4735 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110217 4735 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110229 4735 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110238 4735 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110249 4735 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110259 4735 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110269 4735 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110278 4735 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110288 4735 flags.go:64] FLAG: --cgroup-root="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110298 4735 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110307 4735 flags.go:64] FLAG: --client-ca-file="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110316 4735 flags.go:64] FLAG: --cloud-config="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110325 4735 flags.go:64] FLAG: --cloud-provider="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110334 4735 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110345 4735 flags.go:64] FLAG: --cluster-domain="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110354 4735 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110364 4735 flags.go:64] FLAG: --config-dir="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110373 4735 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110383 4735 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110395 4735 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110405 4735 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110414 4735 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 14:58:35 crc 
kubenswrapper[4735]: I0131 14:58:35.110450 4735 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110461 4735 flags.go:64] FLAG: --contention-profiling="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110472 4735 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110484 4735 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110495 4735 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110505 4735 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110518 4735 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110528 4735 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110539 4735 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110549 4735 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110561 4735 flags.go:64] FLAG: --enable-server="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110570 4735 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110583 4735 flags.go:64] FLAG: --event-burst="100" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110593 4735 flags.go:64] FLAG: --event-qps="50" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110604 4735 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110613 4735 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110623 4735 flags.go:64] FLAG: --eviction-hard="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110634 4735 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110643 4735 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110653 4735 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110663 4735 flags.go:64] FLAG: --eviction-soft="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110672 4735 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110681 4735 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110690 4735 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110699 4735 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110708 4735 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110718 4735 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110727 4735 flags.go:64] FLAG: --feature-gates="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110738 4735 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110747 4735 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: 
I0131 14:58:35.110757 4735 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110767 4735 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110776 4735 flags.go:64] FLAG: --healthz-port="10248" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110786 4735 flags.go:64] FLAG: --help="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110795 4735 flags.go:64] FLAG: --hostname-override="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110804 4735 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110814 4735 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110823 4735 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110832 4735 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110841 4735 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110851 4735 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110860 4735 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110869 4735 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110878 4735 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110887 4735 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110897 4735 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110906 4735 flags.go:64] FLAG: --kube-reserved="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110915 4735 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110923 4735 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110934 4735 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110943 4735 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110952 4735 flags.go:64] FLAG: --lock-file="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110961 4735 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110970 4735 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110979 4735 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.110993 4735 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111002 4735 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111011 4735 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111020 4735 flags.go:64] FLAG: --logging-format="text" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111029 4735 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 
14:58:35.111039 4735 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111048 4735 flags.go:64] FLAG: --manifest-url="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111057 4735 flags.go:64] FLAG: --manifest-url-header="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111069 4735 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111079 4735 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111091 4735 flags.go:64] FLAG: --max-pods="110" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111100 4735 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111110 4735 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111118 4735 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111128 4735 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111137 4735 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111147 4735 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111156 4735 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111179 4735 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111188 4735 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111198 4735 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111208 4735 flags.go:64] FLAG: --pod-cidr="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111217 4735 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111232 4735 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111241 4735 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111250 4735 flags.go:64] FLAG: --pods-per-core="0" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111260 4735 flags.go:64] FLAG: --port="10250" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111269 4735 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111278 4735 flags.go:64] FLAG: --provider-id="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111287 4735 flags.go:64] FLAG: --qos-reserved="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111301 4735 flags.go:64] FLAG: --read-only-port="10255" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111310 4735 flags.go:64] FLAG: --register-node="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111319 4735 flags.go:64] FLAG: --register-schedulable="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111328 4735 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 
14:58:35.111344 4735 flags.go:64] FLAG: --registry-burst="10" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111353 4735 flags.go:64] FLAG: --registry-qps="5" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111362 4735 flags.go:64] FLAG: --reserved-cpus="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111371 4735 flags.go:64] FLAG: --reserved-memory="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111383 4735 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111392 4735 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111401 4735 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111412 4735 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111452 4735 flags.go:64] FLAG: --runonce="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111462 4735 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111472 4735 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111481 4735 flags.go:64] FLAG: --seccomp-default="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111491 4735 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111500 4735 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111510 4735 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111520 4735 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111531 4735 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111540 4735 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111549 4735 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111558 4735 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111568 4735 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111577 4735 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111586 4735 flags.go:64] FLAG: --system-cgroups="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111596 4735 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111610 4735 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111619 4735 flags.go:64] FLAG: --tls-cert-file="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111628 4735 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111638 4735 flags.go:64] FLAG: --tls-min-version="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111648 4735 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111657 4735 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111668 4735 flags.go:64] 
FLAG: --topology-manager-policy-options="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111678 4735 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111689 4735 flags.go:64] FLAG: --v="2" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111701 4735 flags.go:64] FLAG: --version="false" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111714 4735 flags.go:64] FLAG: --vmodule="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111726 4735 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.111736 4735 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.111943 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.111955 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.111967 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.111987 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.111997 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112007 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112017 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112025 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112033 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112041 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112049 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112056 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112065 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112072 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112080 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112088 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112096 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112103 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112111 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112119 4735 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112127 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112137 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112147 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112156 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112166 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112174 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112182 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112191 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112199 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112207 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112216 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112227 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112237 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112247 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112256 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112267 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112275 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112283 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112292 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112300 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112307 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112315 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112323 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112330 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112340 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112349 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112359 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112369 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112379 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112389 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112399 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112408 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112418 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112458 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112468 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112479 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112489 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112499 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112510 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:58:35 crc 
kubenswrapper[4735]: W0131 14:58:35.112520 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112530 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112539 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112549 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112558 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112568 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112578 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112588 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112605 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112616 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112625 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.112634 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.112664 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.129057 4735 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.129120 4735 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129291 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129309 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129329 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129348 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129360 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129372 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129382 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129394 4735 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129406 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129419 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129479 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129490 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129500 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129508 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129516 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129525 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129533 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129541 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129549 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129560 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129570 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129579 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129587 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129596 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129604 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129612 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129620 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129628 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129635 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129643 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129651 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129659 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129667 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 
14:58:35.129675 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129685 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129693 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129702 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129711 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129719 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129726 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129734 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129746 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129757 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129766 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129776 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129786 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129794 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129803 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129811 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129819 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129827 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129835 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129843 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129854 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129868 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129876 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129888 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129898 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129907 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129916 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129925 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129933 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129942 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129950 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129958 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129966 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129973 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129981 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129988 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.129996 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130004 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.130020 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130244 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130258 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130269 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130278 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130289 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130297 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130308 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130320 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130328 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130340 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130348 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130356 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130364 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130371 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130379 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130387 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130397 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130406 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130415 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130457 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130469 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130479 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130490 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130501 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130510 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130518 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130528 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130538 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130547 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130555 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130563 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130571 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130578 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130586 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130596 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130608 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130617 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130624 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130632 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130640 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130650 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130659 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130667 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130676 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130685 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130693 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130702 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130710 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130717 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130725 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130733 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130741 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130749 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130757 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130764 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130772 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130780 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130788 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130796 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130804 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130811 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130819 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130827 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130834 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130842 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130849 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130860 4735 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130868 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130876 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130884 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.130893 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.130907 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.132862 4735 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.140343 4735 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.140511 4735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.144551 4735 server.go:997] "Starting client certificate rotation" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.144609 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.145008 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-01 07:44:27.465382427 +0000 UTC Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.145318 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.193199 4735 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.194482 4735 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.197273 4735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.244975 4735 log.go:25] "Validated CRI v1 runtime API" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.317904 4735 log.go:25] "Validated CRI v1 image API" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.320322 4735 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 14:58:35 crc 
kubenswrapper[4735]: I0131 14:58:35.328665 4735 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-14-53-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.328697 4735 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.347785 4735 manager.go:217] Machine: {Timestamp:2026-01-31 14:58:35.345615596 +0000 UTC m=+1.118944658 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3e901e4e-05cc-4e58-82f7-0308e6f65229 BootID:2870dd97-bffb-460f-a7a6-a5d63988938d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f8:8d:14 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f8:8d:14 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c3:92:46 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:9c:76 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a6:15:18 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:de:0a:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:53:d5:da:66:13 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:34:f3:59:3e:da Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 
Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.348059 4735 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.348262 4735 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.348858 4735 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.349192 4735 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.349250 4735 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.350373 4735 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.350406 4735 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.351273 4735 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.351316 4735 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.353155 4735 state_mem.go:36] "Initialized new in-memory state store" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.353777 4735 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.398550 4735 kubelet.go:418] "Attempting to sync node with API server" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.398591 4735 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.398625 4735 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.398645 4735 kubelet.go:324] "Adding apiserver pod source" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.398664 4735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.403746 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.403881 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.404641 4735 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.404611 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.404754 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.405735 4735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.408122 4735 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409779 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409808 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409818 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409827 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409839 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409849 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409862 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409877 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409889 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409899 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409945 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.409976 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.413707 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.414749 4735 server.go:1280] "Started kubelet" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.415071 4735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.414973 4735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.416149 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.416346 4735 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 14:58:35 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425003 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425049 4735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425282 4735 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425350 4735 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425277 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:34:37.95752419 +0000 UTC Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425392 4735 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.425290 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.425485 4735 server.go:460] "Adding debug handlers to kubelet server" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.426160 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="200ms" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.426611 4735 factory.go:55] Registering systemd factory Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.426642 4735 factory.go:221] Registration of the systemd container factory successfully Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.427017 4735 factory.go:153] Registering CRI-O factory Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.427012 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.427070 4735 factory.go:221] Registration of the crio container factory successfully Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.427104 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.427231 4735 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.427286 4735 factory.go:103] Registering Raw factory Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.427325 4735 manager.go:1196] Started watching for new ooms in manager Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.428615 4735 manager.go:319] Starting recovery of all containers Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494247 4735 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494408 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494477 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494499 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494518 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494546 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494567 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494595 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494620 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494650 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494672 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494695 4735 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494723 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494756 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494776 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494794 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494871 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494895 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494915 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494943 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.494964 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495070 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495133 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495157 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495245 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495443 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495562 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.495890 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.496018 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.496104 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.496131 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.496166 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.492504 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fd8c41e60399d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:35.414706589 +0000 UTC m=+1.188035631,LastTimestamp:2026-01-31 14:58:35.414706589 +0000 UTC m=+1.188035631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511024 4735 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511155 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511178 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511219 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511233 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511248 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511311 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511327 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.511557 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512403 4735 
manager.go:324] Recovery completed Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512568 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512629 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512673 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512704 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512735 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512781 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512810 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512839 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512868 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512899 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512945 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" 
seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.512974 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513036 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513073 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513114 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513145 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513182 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513210 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513239 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513268 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513296 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513412 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: 
I0131 14:58:35.513471 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513513 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513545 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513575 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513601 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513638 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513668 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513697 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513723 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513753 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513833 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513861 4735 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513898 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513928 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513957 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.513996 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514023 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514050 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514080 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514107 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514135 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514199 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514228 4735 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514259 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514287 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514317 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514344 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514372 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514399 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514465 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514498 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514526 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514554 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514583 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514613 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514644 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514675 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514703 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514736 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514766 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514793 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514819 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514872 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514903 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514935 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514965 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.514994 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515021 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515049 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515075 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515101 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515131 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515159 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515188 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515216 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515245 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515272 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515300 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515330 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515357 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515386 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515413 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515472 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515500 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515530 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515556 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515582 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515608 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515634 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515665 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515693 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515720 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515747 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515774 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515799 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515825 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515857 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515905 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515931 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515958 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.515986 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516012 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516037 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516083 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516109 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516136 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516160 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516183 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516208 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516235 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516262 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516287 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516317 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516343 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516371 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516400 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516479 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516515 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516543 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516574 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516606 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516636 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516665 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516709 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516738 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516766 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516804 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516836 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516865 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516893 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516922 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516953 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.516981 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517011 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517041 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517070 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517096 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517124 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517174 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517199 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517224 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517252 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517297 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517331 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517364 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517448 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517480 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517509 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517536 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517562 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517587 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517613 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517640 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517666 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517694 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517721 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517748 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517777 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517806 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517832 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517860 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517899 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517921 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517941 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517962 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517983 4735 reconstruct.go:97] "Volume reconstruction finished" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.517999 4735 reconciler.go:26] "Reconciler: start to sync state" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.523514 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.525607 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.527383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.527457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.527470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.533585 4735 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.533609 4735 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.533635 4735 state_mem.go:36] "Initialized new in-memory state store" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.536669 4735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.538693 4735 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.538757 4735 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.538798 4735 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.538868 4735 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 14:58:35 crc kubenswrapper[4735]: W0131 14:58:35.540330 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.540409 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.626215 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.627012 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.639220 4735 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.726976 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.778491 4735 policy_none.go:49] "None policy: Start" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.780283 4735 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.780464 4735 state_mem.go:35] "Initializing new in-memory state store" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.828128 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.840159 4735 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.884761 4735 manager.go:334] "Starting Device Plugin manager" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.884970 4735 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.885002 4735 server.go:79] "Starting device plugin registration server" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.885662 4735 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.885692 4735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 
14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.886219 4735 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.886358 4735 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.886371 4735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.898857 4735 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.985932 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.987603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.987787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.987923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:35 crc kubenswrapper[4735]: I0131 14:58:35.988074 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:35 crc kubenswrapper[4735]: E0131 14:58:35.988833 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.028417 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.189737 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.193198 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.193279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.193302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.193350 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.194355 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.240847 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 
14:58:36.241122 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.243560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.243628 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.243654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.243894 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.244325 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.244471 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.245964 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.246032 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.246058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.246361 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.246825 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.246957 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.247031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.247150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.247207 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.248555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.248605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.248623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.249049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.249144 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.249185 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.249529 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.249957 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.250081 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.251856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.251920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.251877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.251946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.251994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.252133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.252414 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.252587 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.252625 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254225 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254556 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.254596 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.255531 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.255590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.255613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.328773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.328825 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.328869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.328938 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329021 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329139 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329194 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329248 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329286 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329330 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329380 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.329404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.330241 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.330297 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.333346 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.408693 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.408786 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.417667 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.425842 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:28:54.237512396 +0000 UTC Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436367 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436464 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436506 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436576 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436643 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436716 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436744 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436812 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436840 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436756 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436953 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436962 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.436852 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437073 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437169 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437208 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437286 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437331 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437378 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437456 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437482 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437515 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437565 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437569 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437613 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437629 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.437747 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.545836 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.545929 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.595563 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.595789 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.597495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.597550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.597562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.597591 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.598515 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.614144 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.644502 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.661348 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: I0131 14:58:36.671058 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.714763 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.714880 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.785569 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-48814774d3e2764e302116d27098502dfcee124855f9a85488dac47591854425 WatchSource:0}: Error finding container 48814774d3e2764e302116d27098502dfcee124855f9a85488dac47591854425: Status 404 returned error can't find the container with id 48814774d3e2764e302116d27098502dfcee124855f9a85488dac47591854425 Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.794585 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c5200c501655b5d83ce8e33790906f4a7ba90deaaede4827508eb40888b77d1b WatchSource:0}: Error finding container c5200c501655b5d83ce8e33790906f4a7ba90deaaede4827508eb40888b77d1b: Status 404 returned error can't find the container with id c5200c501655b5d83ce8e33790906f4a7ba90deaaede4827508eb40888b77d1b Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.796891 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7fda755ca39f9578aedc908bd960af813ccf80c1ecff87e089ccf414723483d5 WatchSource:0}: Error finding container 7fda755ca39f9578aedc908bd960af813ccf80c1ecff87e089ccf414723483d5: Status 404 returned error can't find the container with id 7fda755ca39f9578aedc908bd960af813ccf80c1ecff87e089ccf414723483d5 Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.801674 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5e03086341494e5d74eecaad4acedfdc97cd59203b64fd24fc878242f03d43a3 WatchSource:0}: Error finding container 5e03086341494e5d74eecaad4acedfdc97cd59203b64fd24fc878242f03d43a3: Status 404 returned error can't find the container with id 5e03086341494e5d74eecaad4acedfdc97cd59203b64fd24fc878242f03d43a3 Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.820639 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2fb8b181fcf5a2e20a0d6a45281e223b13c36aab168251901890df5e9ff4e87e WatchSource:0}: Error finding container 
2fb8b181fcf5a2e20a0d6a45281e223b13c36aab168251901890df5e9ff4e87e: Status 404 returned error can't find the container with id 2fb8b181fcf5a2e20a0d6a45281e223b13c36aab168251901890df5e9ff4e87e Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.830294 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Jan 31 14:58:36 crc kubenswrapper[4735]: W0131 14:58:36.955170 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:36 crc kubenswrapper[4735]: E0131 14:58:36.955289 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.390964 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:58:37 crc kubenswrapper[4735]: E0131 14:58:37.392502 4735 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.398972 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.401115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.401161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.401179 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.401214 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:37 crc kubenswrapper[4735]: E0131 14:58:37.401795 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.417161 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.426208 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:05:36.826892979 +0000 UTC Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.545640 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c5200c501655b5d83ce8e33790906f4a7ba90deaaede4827508eb40888b77d1b"} Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.547086 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"48814774d3e2764e302116d27098502dfcee124855f9a85488dac47591854425"} Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.548333 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e03086341494e5d74eecaad4acedfdc97cd59203b64fd24fc878242f03d43a3"} Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.549592 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fb8b181fcf5a2e20a0d6a45281e223b13c36aab168251901890df5e9ff4e87e"} Jan 31 14:58:37 crc kubenswrapper[4735]: I0131 14:58:37.550988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7fda755ca39f9578aedc908bd960af813ccf80c1ecff87e089ccf414723483d5"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.417362 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.426464 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:06:19.910850806 +0000 UTC Jan 31 14:58:38 crc kubenswrapper[4735]: W0131 14:58:38.426620 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:38 crc kubenswrapper[4735]: E0131 14:58:38.426740 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:38 crc kubenswrapper[4735]: E0131 14:58:38.432102 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="3.2s" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.557058 4735 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a" exitCode=0 Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.557146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.557202 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.558585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.558618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.558630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.558806 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.561102 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd" exitCode=0 Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.561187 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.561245 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.562640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.562674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.562685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.564705 4735 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218" exitCode=0 Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.564804 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.564811 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.565926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.565977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.565992 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.566163 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.566919 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.566959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.566977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.568280 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d" exitCode=0 Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.568342 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d"} Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.568462 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.569378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.569410 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:38 crc kubenswrapper[4735]: I0131 14:58:38.569450 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:38 crc kubenswrapper[4735]: W0131 14:58:38.644706 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:38 crc kubenswrapper[4735]: E0131 14:58:38.644795 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.001922 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.007355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.007405 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.007414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.007493 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:39 crc kubenswrapper[4735]: E0131 14:58:39.008093 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.241:6443: connect: connection refused" node="crc" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.417104 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.426949 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:17:31.711669287 +0000 UTC Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.574404 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.574373 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2381a88b1e6c4cfa839c8a6dc1592af9e494f34165ec26535d3bb7e92b1d7761"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.575315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.575359 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.575373 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.578212 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.578254 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.578267 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.578351 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.579250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.579285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.579298 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.581713 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.581741 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.581755 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.584759 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.584789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.584802 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.584929 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.587277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.587331 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.587355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.588986 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3" exitCode=0 Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.589065 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3"} Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.589198 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.590850 4735 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.590880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:39 crc kubenswrapper[4735]: I0131 14:58:39.590893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:39 crc kubenswrapper[4735]: W0131 14:58:39.599887 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:39 crc kubenswrapper[4735]: E0131 14:58:39.600081 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.030819 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:40 crc kubenswrapper[4735]: W0131 14:58:40.150409 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.241:6443: connect: connection refused Jan 31 14:58:40 crc kubenswrapper[4735]: E0131 14:58:40.150530 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.241:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.428047 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:12:06.442181674 +0000 UTC Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.597659 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a"} Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.597738 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee1dbcbecd6d021d1ceef801d0b6b4397dd43161068132f2f3d4fe184fae3ce2"} Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.597790 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.599342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.599403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.599456 4735 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.600591 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a" exitCode=0 Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.600803 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.600836 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.600887 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.600989 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.601056 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.602110 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a"} Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603846 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.604036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:40 crc kubenswrapper[4735]: I0131 14:58:40.603949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.260862 4735 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.269178 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.429200 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:28:11.636875723 +0000 UTC Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.491616 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.541496 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609779 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512"} Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609862 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6"} Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609913 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be"} Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609944 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609992 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.609943 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611791 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:41 crc kubenswrapper[4735]: I0131 14:58:41.611879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.208754 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:42 crc kubenswrapper[4735]: 
I0131 14:58:42.216149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.216245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.216271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.216331 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.429376 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:26:31.685792409 +0000 UTC Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.622049 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b"} Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.622125 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.622156 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b"} Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.622190 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.622351 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.623288 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624100 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624881 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:42 crc kubenswrapper[4735]: I0131 14:58:42.624935 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.031713 4735 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.031832 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.429784 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:00:40.663743233 +0000 UTC Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.625838 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.627229 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.627291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.627319 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.758889 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.759207 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.760907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.760987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:43 crc kubenswrapper[4735]: I0131 14:58:43.761012 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.198655 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.430132 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:43:30.037007101 +0000 UTC Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.580281 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.580579 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.582386 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.582481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.582506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.628925 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.630366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.630456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:44 crc kubenswrapper[4735]: I0131 14:58:44.630478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.196486 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.196718 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.198668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.198714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.198732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:45 crc kubenswrapper[4735]: I0131 14:58:45.431260 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:59:15.061890843 +0000 UTC Jan 31 14:58:45 crc kubenswrapper[4735]: E0131 14:58:45.898987 4735 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.338488 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.338802 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.340577 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.340658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.340672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.431724 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:00:03.589810148 +0000 UTC Jan 31 
14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.772608 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.772881 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.774403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.774537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:46 crc kubenswrapper[4735]: I0131 14:58:46.774565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:47 crc kubenswrapper[4735]: I0131 14:58:47.449183 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:19:31.214625843 +0000 UTC Jan 31 14:58:48 crc kubenswrapper[4735]: I0131 14:58:48.449850 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:36:57.720037391 +0000 UTC Jan 31 14:58:49 crc kubenswrapper[4735]: I0131 14:58:49.450934 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:28:21.739288039 +0000 UTC Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.418079 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.451704 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:29:15.289696137 +0000 UTC Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.899564 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.899641 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.904106 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 14:58:50 crc kubenswrapper[4735]: I0131 14:58:50.904197 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.452416 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:26:01.440709091 +0000 UTC Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.550845 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]log ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]etcd ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-filter ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-apiextensions-informers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-apiextensions-controllers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/crd-informer-synced ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-system-namespaces-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 31 14:58:51 crc kubenswrapper[4735]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/bootstrap-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/start-kube-aggregator-informers ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-registration-controller ok Jan 31 
14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-discovery-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]autoregister-completion ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-openapi-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 31 14:58:51 crc kubenswrapper[4735]: livez check failed Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.551122 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.653409 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.655952 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee1dbcbecd6d021d1ceef801d0b6b4397dd43161068132f2f3d4fe184fae3ce2" exitCode=255 Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.656007 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ee1dbcbecd6d021d1ceef801d0b6b4397dd43161068132f2f3d4fe184fae3ce2"} Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.656200 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.657142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.657180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.657193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:51 crc kubenswrapper[4735]: I0131 14:58:51.657706 4735 scope.go:117] "RemoveContainer" containerID="ee1dbcbecd6d021d1ceef801d0b6b4397dd43161068132f2f3d4fe184fae3ce2" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.452886 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:26:43.598723462 +0000 UTC Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.593640 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.662223 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.664877 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79"} Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.665062 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.666392 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.666527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:52 crc kubenswrapper[4735]: I0131 14:58:52.666561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.032538 4735 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.032669 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.453631 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:06:40.940144585 +0000 UTC Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.671653 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.675499 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.679174 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" exitCode=255 Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.679244 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79"} Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.679366 4735 scope.go:117] "RemoveContainer" containerID="ee1dbcbecd6d021d1ceef801d0b6b4397dd43161068132f2f3d4fe184fae3ce2" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.679549 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.681061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:53 crc kubenswrapper[4735]: 
I0131 14:58:53.681128 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.681155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:53 crc kubenswrapper[4735]: I0131 14:58:53.682528 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:53 crc kubenswrapper[4735]: E0131 14:58:53.682899 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.454883 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:45:05.182734909 +0000 UTC Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.685122 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.688480 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.689813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.689904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.689935 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:54 crc kubenswrapper[4735]: I0131 14:58:54.691263 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:54 crc kubenswrapper[4735]: E0131 14:58:54.691709 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.204950 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.205136 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.206633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.206717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.206746 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.455665 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:14:58.897338745 +0000 UTC Jan 31 14:58:55 crc kubenswrapper[4735]: E0131 14:58:55.886456 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.888525 4735 trace.go:236] Trace[1965995319]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:58:43.191) (total time: 12696ms): Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[1965995319]: ---"Objects listed" error: 12696ms (14:58:55.888) Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[1965995319]: [12.696470596s] [12.696470596s] END Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.888557 4735 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.889235 4735 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.890085 4735 trace.go:236] Trace[1660544388]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:58:42.319) (total time: 13570ms): Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[1660544388]: ---"Objects listed" error: 13570ms (14:58:55.889) Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[1660544388]: [13.570066459s] [13.570066459s] END Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.890122 4735 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 14:58:55 crc kubenswrapper[4735]: E0131 14:58:55.890748 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.891117 4735 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.892207 4735 trace.go:236] Trace[635776519]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:58:43.569) (total time: 12322ms): Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[635776519]: ---"Objects listed" error: 12322ms (14:58:55.892) Jan 31 14:58:55 crc kubenswrapper[4735]: Trace[635776519]: [12.322294283s] [12.322294283s] END Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.892341 4735 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.900349 4735 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.919136 4735 csr.go:261] certificate signing request csr-d59tr is approved, waiting to be issued Jan 31 14:58:55 crc kubenswrapper[4735]: I0131 14:58:55.924795 4735 csr.go:257] certificate signing request csr-d59tr is issued Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.365158 4735 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.380453 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.454166 4735 apiserver.go:52] "Watching apiserver" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.456362 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:10:30.656642433 +0000 UTC Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.458475 4735 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.458964 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.459429 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.459542 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.459601 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.459755 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.459865 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.459974 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.460239 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.460261 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.460282 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.463950 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464009 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464042 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.463962 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464339 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464370 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464594 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.464609 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.526536 4735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.546479 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.548972 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.552727 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.561933 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.562298 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.563210 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.564850 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.576143 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.586103 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595453 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595497 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595540 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595559 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595577 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595596 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595614 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.595980 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596164 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596286 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596398 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596447 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596509 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596528 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596717 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596827 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.596970 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597023 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597043 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597247 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597060 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597301 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597324 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597298 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597507 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597541 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597559 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597577 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597592 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597611 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597627 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597644 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597662 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597679 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597718 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597735 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597758 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597775 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597785 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597799 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597816 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597833 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597849 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.597971 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598012 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598034 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598057 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598078 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598104 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598123 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598139 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598175 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598192 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598207 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598213 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598221 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598223 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598263 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598293 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598319 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598343 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 
31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598395 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598438 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598466 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598491 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598514 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598540 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598566 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598592 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598616 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598642 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598668 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598700 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598724 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598746 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598769 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598789 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598811 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598833 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598857 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598880 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598924 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598947 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598969 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598997 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599022 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599045 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599070 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599098 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599124 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599148 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599173 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599197 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599222 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599247 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599275 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599300 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599324 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599357 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599381 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599404 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599458 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599550 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599580 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599608 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599632 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599656 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599679 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599706 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599729 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599753 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599778 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599825 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599847 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599869 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599953 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599976 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600000 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:58:56 crc 
kubenswrapper[4735]: I0131 14:58:56.600024 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600119 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600146 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600171 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600224 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600249 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600273 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600298 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598616 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598639 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598788 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598846 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.598934 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600373 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599074 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599207 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599220 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599362 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599367 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599571 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.599739 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600142 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600312 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600587 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600852 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600578 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600931 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601141 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601150 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601167 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601170 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601236 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601249 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601354 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601360 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601368 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601448 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601481 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601544 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601618 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601622 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601677 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601744 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.601852 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602049 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602051 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602071 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602377 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602614 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602505 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602850 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.602893 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603084 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603089 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603199 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603241 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603409 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603461 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603468 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603608 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603655 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603776 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603788 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603957 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.603995 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.604165 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.604178 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.604339 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.604381 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.604582 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.605605 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.606080 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.606397 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.607117 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.607153 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.607342 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.607724 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.607753 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.608325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.608867 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.608944 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.608977 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609154 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609332 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609390 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609548 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609775 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609800 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609863 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.610201 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.610249 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.609536 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.610543 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:58:57.110515385 +0000 UTC m=+22.883844427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.610817 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611161 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.600325 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611273 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611479 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611592 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611627 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611734 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611766 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611795 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611826 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611848 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611856 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611923 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611944 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612015 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612104 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612131 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612177 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612204 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612234 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612256 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612299 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612332 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612357 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612380 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612412 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612482 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612512 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612540 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612570 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612595 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612620 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612647 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612670 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612700 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612724 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612752 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612774 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612801 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612828 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612848 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612872 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612947 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612978 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613005 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613062 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613092 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613117 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613136 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.611920 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612138 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612468 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612499 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.612931 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613118 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613338 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613410 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613511 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613625 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613766 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.613813 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.614110 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.614144 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616148 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616208 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616243 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616279 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616311 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616340 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616371 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616403 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616444 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616475 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616510 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.616544 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.618530 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.618621 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.614165 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.618664 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621851 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621735 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.614473 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.614479 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.618228 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.617907 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.620038 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.620186 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.620215 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.620314 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.620509 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621208 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621241 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621466 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621476 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621611 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621709 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621730 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621826 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.621989 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622221 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622349 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622373 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622718 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622731 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.622814 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623224 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623250 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623275 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623302 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623328 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623353 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623381 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623442 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623472 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623496 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623524 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623553 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623583 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623611 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623646 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623714 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623745 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623770 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623793 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623823 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623847 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623867 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623891 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623919 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623940 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623965 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623986 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624015 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624184 4735 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624200 4735 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624213 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624227 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624239 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624249 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624262 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624274 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624286 4735 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624296 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc 
kubenswrapper[4735]: I0131 14:58:56.624307 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624318 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624330 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624340 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624350 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624372 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624382 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624396 4735 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624406 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624430 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624440 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624450 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624461 4735 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624472 4735 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624483 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624497 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624509 4735 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624521 4735 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624531 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624542 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624553 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624563 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624573 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624583 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624593 4735 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624615 4735 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624625 4735 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624634 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624644 4735 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624654 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624665 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624677 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624689 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624700 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624711 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624722 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624731 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624741 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624751 4735 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc 
kubenswrapper[4735]: I0131 14:58:56.624761 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624770 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624782 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624792 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624805 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624815 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624825 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624835 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624844 4735 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624854 4735 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624865 4735 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624874 4735 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624893 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 
14:58:56.624904 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624915 4735 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624926 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624937 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624947 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624957 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624970 4735 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624980 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624990 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624999 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625009 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625019 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625029 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 
14:58:56.625041 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625051 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625062 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625084 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625096 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625106 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625116 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625127 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625138 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625148 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625158 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625168 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625177 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625188 4735 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625203 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625213 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625230 4735 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625240 4735 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625250 4735 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625262 4735 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625273 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625284 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625294 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625303 4735 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625313 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625322 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625332 4735 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625341 4735 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625351 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625360 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625370 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625379 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625398 4735 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625409 4735 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625431 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625440 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625449 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625459 4735 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625468 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625478 4735 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625489 4735 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625498 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625507 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625515 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625525 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625534 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625544 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625556 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625567 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625576 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625585 4735 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625594 4735 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: 
I0131 14:58:56.625603 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625612 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625621 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625630 4735 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625640 4735 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625650 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625660 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625671 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625681 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625691 4735 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625699 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625708 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625717 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625726 4735 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625746 4735 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625756 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625766 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625775 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625784 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625793 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625802 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.628554 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631391 4735 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631668 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631463 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623351 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623602 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.623878 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624208 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624462 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.624690 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625248 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625337 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625584 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.625826 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.626132 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.627122 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.628173 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.629045 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.629357 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.629733 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.630127 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.630146 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.630743 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.630996 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631200 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631480 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631894 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631998 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.631994 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.632266 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.632474 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.632795 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.632816 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:57.132783468 +0000 UTC m=+22.906112510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.632831 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.632875 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.632911 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.633049 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:57.133028945 +0000 UTC m=+22.906357987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.633537 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.633941 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.634576 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.638278 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.638675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.639474 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.646008 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646811 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646844 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646836 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646865 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646882 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646899 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646951 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:57.146915073 +0000 UTC m=+22.920244105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.646973 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:57.146964725 +0000 UTC m=+22.920293767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.647208 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.650973 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.651409 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.651983 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.652348 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.653172 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.654838 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.655035 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.655283 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.657531 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.659321 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.661948 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.670544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.670686 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.670873 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.671902 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.678763 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.686642 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.689528 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.700461 4735 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.700773 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.700956 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.701148 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: E0131 14:58:56.701405 4735 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.709731 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.718691 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726239 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726293 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726344 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726356 4735 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726366 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726376 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726385 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726397 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726408 4735 reconciler_common.go:293] "Volume detached for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726443 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726457 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726482 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726470 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726583 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726621 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726648 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726693 4735 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726710 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726726 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726742 4735 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on 
node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726775 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726792 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726812 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726827 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726864 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726880 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726894 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726908 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726947 4735 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726960 4735 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.726976 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727014 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727032 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727050 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727065 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727098 4735 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727112 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727127 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727142 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727177 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727194 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727212 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727228 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727261 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727275 4735 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727291 4735 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727304 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727340 4735 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.727355 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.731819 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.744772 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.756540 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.766205 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.773336 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.780811 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.787173 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:58:56 crc kubenswrapper[4735]: W0131 14:58:56.801681 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-073f15f65dfe4c0abe880326deb0a66de711f6fcba325080e808473cab7e285f WatchSource:0}: Error finding container 073f15f65dfe4c0abe880326deb0a66de711f6fcba325080e808473cab7e285f: Status 404 returned error can't find the container with id 073f15f65dfe4c0abe880326deb0a66de711f6fcba325080e808473cab7e285f Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.925818 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 14:53:55 +0000 UTC, rotation deadline is 2026-11-19 03:46:11.321702446 +0000 UTC Jan 31 14:58:56 crc kubenswrapper[4735]: I0131 14:58:56.925927 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6996h47m14.395778863s for next certificate rotation Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.132737 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.132829 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.132923 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:58:58.132893729 +0000 UTC m=+23.906222781 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.132928 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.133004 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:58.132989992 +0000 UTC m=+23.906319044 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.233623 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.233678 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.233724 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233848 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233848 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233871 4735 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233882 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233888 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233891 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233962 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:58.233940645 +0000 UTC m=+24.007269697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.233983 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:58.233974766 +0000 UTC m=+24.007303818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.234048 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.234237 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:58.234197962 +0000 UTC m=+24.007527034 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.456519 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:34:37.302014818 +0000 UTC Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.539235 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.539479 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.543972 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.545262 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.547316 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.548312 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.548950 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.549497 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.550260 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.550885 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.551515 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 
31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.552203 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.552772 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.553533 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.555805 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.556756 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.557326 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.557908 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.558553 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.558992 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.559629 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.560247 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.563043 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.563864 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.564865 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 
14:58:57.565565 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.565977 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.567049 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.568201 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.568724 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.569292 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.570217 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.570835 4735 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.570943 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.574156 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.574672 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.575083 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.577732 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.578444 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 
14:58:57.580154 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.581477 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.582168 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.583111 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.583721 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.585068 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.586296 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.586828 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.587770 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.588373 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.589967 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.590999 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.591559 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.592367 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.592899 4735 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.593937 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.594489 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.640762 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gwdl8"] Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.641082 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.648403 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.650054 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.653151 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.682804 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.698610 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.698687 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.698704 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"073f15f65dfe4c0abe880326deb0a66de711f6fcba325080e808473cab7e285f"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.700433 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.700498 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"255df88934a04e9f0ee647b064f3561c29f0f9012eb4f71dd71d8f73e9bfd8c4"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.702299 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a048cc27bc0a9fce18a493c0a1e14278a2d51bdf6b4c2c586d81222bbd8a89d9"} Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.703648 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:57 crc kubenswrapper[4735]: E0131 14:58:57.703836 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.709750 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.736174 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.737039 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.773960 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.810197 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.838244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2h6v\" (UniqueName: \"kubernetes.io/projected/3527f2eb-24cf-4d43-911b-dfbd7afba999-kube-api-access-g2h6v\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.838338 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3527f2eb-24cf-4d43-911b-dfbd7afba999-hosts-file\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.846402 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.891971 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.924090 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.939042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3527f2eb-24cf-4d43-911b-dfbd7afba999-hosts-file\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.939107 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2h6v\" (UniqueName: \"kubernetes.io/projected/3527f2eb-24cf-4d43-911b-dfbd7afba999-kube-api-access-g2h6v\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.939252 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3527f2eb-24cf-4d43-911b-dfbd7afba999-hosts-file\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.962110 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.974570 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2h6v\" (UniqueName: \"kubernetes.io/projected/3527f2eb-24cf-4d43-911b-dfbd7afba999-kube-api-access-g2h6v\") pod \"node-resolver-gwdl8\" (UID: \"3527f2eb-24cf-4d43-911b-dfbd7afba999\") " pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:57 crc kubenswrapper[4735]: I0131 14:58:57.985222 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.001521 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.017066 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.036131 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.046535 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.061744 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.092416 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.104351 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.115547 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.140629 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.140750 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:58 crc 
kubenswrapper[4735]: E0131 14:58:58.140887 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:59:00.140849446 +0000 UTC m=+25.914178488 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.140966 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.141047 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:00.141026211 +0000 UTC m=+25.914355253 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.241526 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.241585 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.241608 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241716 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241735 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 
31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241753 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241766 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241787 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:00.241767338 +0000 UTC m=+26.015096380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241803 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:00.241793709 +0000 UTC m=+26.015122751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241853 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241864 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241871 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.241892 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:00.241885932 +0000 UTC m=+26.015214974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.269186 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gwdl8" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.457653 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:28:46.173433558 +0000 UTC Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.472673 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gq77t"] Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.473157 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.475007 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.475254 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.475321 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.475527 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.475529 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.476035 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hg7gl"] Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.476442 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.477697 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.480107 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.480184 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.480275 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.480852 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c6zv"] Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.481677 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ck7n9"] Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.481943 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.481956 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.483127 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.483500 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.483693 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.484586 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.486810 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.486825 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.486937 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.491316 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.491556 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.491712 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.505576 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.518545 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.532304 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.539861 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.540041 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.540522 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.540599 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.543631 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.557608 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.577559 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.590290 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.603313 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.616128 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.625987 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645147 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/582442e0-b079-476d-849d-a4902306aba0-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645205 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-cni-binary-copy\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645231 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-system-cni-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645253 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645280 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645354 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-system-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645382 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-multus\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645462 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645491 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6td7\" (UniqueName: \"kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-conf-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645526 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-daemon-config\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645542 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-binary-copy\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645574 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645595 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-etc-kubernetes\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645609 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645628 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" 
(UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/582442e0-b079-476d-849d-a4902306aba0-rootfs\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645684 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645701 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645716 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645734 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645755 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/582442e0-b079-476d-849d-a4902306aba0-proxy-tls\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645903 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-k8s-cni-cncf-io\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645922 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnwc\" 
(UniqueName: \"kubernetes.io/projected/671e4f66-1c2f-436a-800d-fd3840e9830d-kube-api-access-qbnwc\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645937 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645962 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.645984 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtsf\" (UniqueName: \"kubernetes.io/projected/28589772-80ef-4a88-9b68-eb15b241ef7f-kube-api-access-qrtsf\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646003 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqkr\" (UniqueName: \"kubernetes.io/projected/582442e0-b079-476d-849d-a4902306aba0-kube-api-access-dwqkr\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-kubelet\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646036 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646051 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-os-release\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646067 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646085 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-netns\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646103 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-bin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646125 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646140 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-socket-dir-parent\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646156 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-os-release\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646171 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646185 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646218 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-cnibin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646276 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-hostroot\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646293 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-multus-certs\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646309 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-cnibin\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646323 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.646396 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.649605 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.666869 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.686152 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.705669 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gwdl8" event={"ID":"3527f2eb-24cf-4d43-911b-dfbd7afba999","Type":"ContainerStarted","Data":"7ce8968dc2f370a05f50178ff818a705a0ac0dfff761140037bb89fdaaed2252"} Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.706716 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:58:58 crc kubenswrapper[4735]: E0131 14:58:58.706986 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.718196 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.739896 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-cnibin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747883 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-hostroot\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747909 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-multus-certs\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747931 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-cnibin\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747951 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747970 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.747991 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748013 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/582442e0-b079-476d-849d-a4902306aba0-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748032 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-cni-binary-copy\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748040 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-cnibin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748095 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd\") pod \"ovnkube-node-2c6zv\" 
(UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748097 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748130 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-system-cni-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748063 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-cnibin\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-system-cni-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748328 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748347 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-hostroot\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748358 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc 
kubenswrapper[4735]: I0131 14:58:58.748453 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748402 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-system-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748480 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-system-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748523 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-multus\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748545 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748584 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-multus\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6td7\" (UniqueName: \"kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748639 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-conf-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748670 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-daemon-config\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748689 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-binary-copy\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748690 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-conf-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748708 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749537 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748727 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749594 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-etc-kubernetes\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748919 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-multus-certs\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749395 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-binary-copy\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749623 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749453 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-daemon-config\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749645 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749683 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/582442e0-b079-476d-849d-a4902306aba0-rootfs\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749704 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-etc-kubernetes\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749717 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749745 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749771 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749821 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749845 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/582442e0-b079-476d-849d-a4902306aba0-proxy-tls\") 
pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749866 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28589772-80ef-4a88-9b68-eb15b241ef7f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749925 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-k8s-cni-cncf-io\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748948 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/671e4f66-1c2f-436a-800d-fd3840e9830d-cni-binary-copy\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749941 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749885 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-k8s-cni-cncf-io\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749989 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbnwc\" (UniqueName: \"kubernetes.io/projected/671e4f66-1c2f-436a-800d-fd3840e9830d-kube-api-access-qbnwc\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749999 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750009 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.748891 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/582442e0-b079-476d-849d-a4902306aba0-mcd-auth-proxy-config\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " 
pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750019 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750037 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750059 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrtsf\" (UniqueName: \"kubernetes.io/projected/28589772-80ef-4a88-9b68-eb15b241ef7f-kube-api-access-qrtsf\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750068 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwqkr\" (UniqueName: \"kubernetes.io/projected/582442e0-b079-476d-849d-a4902306aba0-kube-api-access-dwqkr\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750111 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-kubelet\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750128 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750150 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-os-release\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750162 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750169 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750193 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-netns\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750235 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-bin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750262 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750277 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-socket-dir-parent\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750293 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-os-release\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750310 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750332 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.749987 
4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/582442e0-b079-476d-849d-a4902306aba0-rootfs\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750559 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-socket-dir-parent\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-multus-cni-dir\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750597 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-run-netns\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750595 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750632 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750665 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-kubelet\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750636 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-host-var-lib-cni-bin\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750675 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units\") pod 
\"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750716 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750832 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750904 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-os-release\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.750981 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/671e4f66-1c2f-436a-800d-fd3840e9830d-os-release\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.754773 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28589772-80ef-4a88-9b68-eb15b241ef7f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.760977 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/582442e0-b079-476d-849d-a4902306aba0-proxy-tls\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.767277 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6td7\" (UniqueName: \"kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.771016 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwqkr\" (UniqueName: \"kubernetes.io/projected/582442e0-b079-476d-849d-a4902306aba0-kube-api-access-dwqkr\") pod \"machine-config-daemon-gq77t\" (UID: \"582442e0-b079-476d-849d-a4902306aba0\") " pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.773180 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert\") pod \"ovnkube-node-2c6zv\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.773621 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbnwc\" (UniqueName: \"kubernetes.io/projected/671e4f66-1c2f-436a-800d-fd3840e9830d-kube-api-access-qbnwc\") pod \"multus-hg7gl\" (UID: \"671e4f66-1c2f-436a-800d-fd3840e9830d\") " pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.775875 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrtsf\" (UniqueName: \"kubernetes.io/projected/28589772-80ef-4a88-9b68-eb15b241ef7f-kube-api-access-qrtsf\") pod \"multus-additional-cni-plugins-ck7n9\" (UID: \"28589772-80ef-4a88-9b68-eb15b241ef7f\") " pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.786003 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.786598 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"na
me\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.795528 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hg7gl" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.801274 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:58:58 crc kubenswrapper[4735]: W0131 14:58:58.802413 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582442e0_b079_476d_849d_a4902306aba0.slice/crio-4df5e5515963b9b79ee53b55501fac0ecec2bb5b45b5ce64fc5f5791823a14c4 WatchSource:0}: Error finding container 4df5e5515963b9b79ee53b55501fac0ecec2bb5b45b5ce64fc5f5791823a14c4: Status 404 returned error can't find the container with id 4df5e5515963b9b79ee53b55501fac0ecec2bb5b45b5ce64fc5f5791823a14c4 Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.803812 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.808704 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.826383 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.845449 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.867752 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.886126 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.902278 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:58 crc kubenswrapper[4735]: I0131 14:58:58.924438 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.476949 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:28:02.651567506 +0000 UTC Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.539328 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:58:59 crc kubenswrapper[4735]: E0131 14:58:59.539479 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.711006 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerStarted","Data":"913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.711054 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerStarted","Data":"3ff0478153c6775cf8459040af5a5a5c2a420a59afe1b29328bdf167f1bb954a"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.714109 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" exitCode=0 Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.714176 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.714267 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"e277af77be4f660fa8c7949f4f990ae5ef00f5b2fa458e089bc6f62f9a09801f"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.716523 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerStarted","Data":"f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.716603 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerStarted","Data":"21e2d0e67277fe54fef15f2af2caa5b4d4887f64751644053ebc1f9eab431f8e"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.718841 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.718908 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.718934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"4df5e5515963b9b79ee53b55501fac0ecec2bb5b45b5ce64fc5f5791823a14c4"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.720542 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.721909 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gwdl8" event={"ID":"3527f2eb-24cf-4d43-911b-dfbd7afba999","Type":"ContainerStarted","Data":"9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b"} Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.737717 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.752216 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.765756 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.783150 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.797042 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.811721 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.827562 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 
14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.842417 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.863806 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.903056 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.934943 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.963348 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:58:59 crc kubenswrapper[4735]: I0131 14:58:59.989330 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:58:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.011287 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.025763 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.037002 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.039301 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.042151 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.046988 4735 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.055300 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.073608 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b
11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.092409 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.104286 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.116852 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.137530 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.155133 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.171799 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.183553 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.183704 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.183835 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:59:04.183796275 +0000 UTC m=+29.957125317 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.183924 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.184024 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:04.183980171 +0000 UTC m=+29.957309413 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.188487 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.204380 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.219938 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.238411 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.252766 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.271239 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.284276 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.284320 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.284350 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284535 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284555 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284568 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284574 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284629 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284656 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284660 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 
14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284633 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:04.284618925 +0000 UTC m=+30.057947967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284770 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:04.284745459 +0000 UTC m=+30.058074541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.284797 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:04.28478422 +0000 UTC m=+30.058113302 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.286140 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.307686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.319898 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.333904 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.352031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.371219 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.390965 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.408769 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.426731 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.443168 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff
70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.477178 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:55:32.398998326 +0000 UTC Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.539780 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.539950 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.540295 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:00 crc kubenswrapper[4735]: E0131 14:59:00.540470 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.725412 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6" exitCode=0 Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.725488 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.732700 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.732753 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.732765 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.732775 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.732784 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.750160 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.771011 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.786091 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.800416 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.815879 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.838567 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.852871 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.871609 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.888861 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.905026 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.917378 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.931123 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.942936 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:00 crc kubenswrapper[4735]: I0131 14:59:00.955060 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.478065 4735 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:21:40.89732088 +0000 UTC Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.524654 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-q9r6v"] Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.525086 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: W0131 14:59:01.527991 4735 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 14:59:01 crc kubenswrapper[4735]: E0131 14:59:01.528036 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:01 crc kubenswrapper[4735]: W0131 14:59:01.528173 4735 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: secrets "node-ca-dockercfg-4777p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 14:59:01 crc kubenswrapper[4735]: E0131 14:59:01.528197 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-ca-dockercfg-4777p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:01 crc kubenswrapper[4735]: W0131 14:59:01.528248 4735 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: configmaps "image-registry-certificates" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 14:59:01 crc kubenswrapper[4735]: E0131 14:59:01.528281 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-registry-certificates\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:01 crc kubenswrapper[4735]: W0131 14:59:01.530351 4735 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this 
object Jan 31 14:59:01 crc kubenswrapper[4735]: E0131 14:59:01.530383 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.542233 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:01 crc kubenswrapper[4735]: E0131 14:59:01.542362 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.570984 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.598401 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.598752 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg47j\" (UniqueName: \"kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.598812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/8a5d9418-4802-43d4-947e-6896f68d8a68-host\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.615580 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.630602 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.642676 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.660506 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.687496 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.699872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a5d9418-4802-43d4-947e-6896f68d8a68-host\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.699969 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.700043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg47j\" (UniqueName: \"kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.700076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a5d9418-4802-43d4-947e-6896f68d8a68-host\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.703475 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.721959 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.739562 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerStarted","Data":"fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff"} Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.743367 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.753722 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.770077 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.785048 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.798483 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.812210 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.825943 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.838448 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.853328 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.866594 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.888130 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.908018 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.922883 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.938277 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.953283 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.971120 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:01 crc kubenswrapper[4735]: I0131 14:59:01.985606 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.004826 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.016464 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.028693 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.040971 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.052562 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.063277 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.291295 4735 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.293907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.293989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.294009 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.294259 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.302657 4735 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.303106 4735 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.304693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.304745 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.304759 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.304787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.304804 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.323969 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.328171 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.328216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.328230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.328246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.328259 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.334865 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.342283 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.346158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.346224 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.346245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.346277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.346301 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.362103 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.366615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.366696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.366718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.366749 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.366769 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.380335 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.384690 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.384751 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.384768 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.384792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.384807 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.399250 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.399519 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.402038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.402091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.402111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.402137 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.402158 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.478946 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:43:33.950030812 +0000 UTC Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.505230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.505303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.505361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.505589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.505631 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.539293 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.539293 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.539595 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.540014 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.594278 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.595120 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.595308 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.607977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.608027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.608042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.608065 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.608083 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.700998 4735 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: failed to sync configmap cache: timed out waiting for the condition Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.701141 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca podName:8a5d9418-4802-43d4-947e-6896f68d8a68 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:03.201107319 +0000 UTC m=+28.974436361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca") pod "node-ca-q9r6v" (UID: "8a5d9418-4802-43d4-947e-6896f68d8a68") : failed to sync configmap cache: timed out waiting for the condition Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.711405 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.711471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.711482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.711501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.711517 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.717570 4735 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.717632 4735 projected.go:194] Error preparing data for projected volume kube-api-access-rg47j for pod openshift-image-registry/node-ca-q9r6v: failed to sync configmap cache: timed out waiting for the condition Jan 31 14:59:02 crc kubenswrapper[4735]: E0131 14:59:02.717717 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j podName:8a5d9418-4802-43d4-947e-6896f68d8a68 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:03.217691725 +0000 UTC m=+28.991020767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg47j" (UniqueName: "kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j") pod "node-ca-q9r6v" (UID: "8a5d9418-4802-43d4-947e-6896f68d8a68") : failed to sync configmap cache: timed out waiting for the condition Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.750541 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff" exitCode=0 Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.750630 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.767564 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.785232 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.800031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.814050 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.814082 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.814091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.814118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.814129 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.820503 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.824325 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.838243 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.864161 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.877381 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.894101 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.911544 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.917500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.917581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.917594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.917617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:02 crc 
kubenswrapper[4735]: I0131 14:59:02.917634 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:02Z","lastTransitionTime":"2026-01-31T14:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.924476 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.933977 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.954622 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-me
trics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.967941 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:02 crc kubenswrapper[4735]: I0131 14:59:02.985355 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.005042 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.022812 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.023883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.023987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.024048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.024129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.024194 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.029313 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.128798 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.129181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.129194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.129211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.129221 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.214379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.216414 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a5d9418-4802-43d4-947e-6896f68d8a68-serviceca\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.232972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.233051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.233070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.233650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.233717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.315997 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg47j\" (UniqueName: \"kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.326609 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg47j\" (UniqueName: \"kubernetes.io/projected/8a5d9418-4802-43d4-947e-6896f68d8a68-kube-api-access-rg47j\") pod \"node-ca-q9r6v\" (UID: \"8a5d9418-4802-43d4-947e-6896f68d8a68\") " pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.337217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.337263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.337280 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.337303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.337321 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.403256 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q9r6v" Jan 31 14:59:03 crc kubenswrapper[4735]: W0131 14:59:03.428591 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a5d9418_4802_43d4_947e_6896f68d8a68.slice/crio-1112873089e502588da49c74f38a75d844fcaa723ea571f6158d03cc82468281 WatchSource:0}: Error finding container 1112873089e502588da49c74f38a75d844fcaa723ea571f6158d03cc82468281: Status 404 returned error can't find the container with id 1112873089e502588da49c74f38a75d844fcaa723ea571f6158d03cc82468281 Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.440136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.440201 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.440222 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.440250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.440268 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.479365 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:58:41.888749991 +0000 UTC Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.540202 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:03 crc kubenswrapper[4735]: E0131 14:59:03.540469 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.549845 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.549973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.549991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.550010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.550050 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.652979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.653023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.653040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.653058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.653072 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.756051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.756108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.756123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.756146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.756162 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.760780 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3" exitCode=0 Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.760859 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.766347 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q9r6v" event={"ID":"8a5d9418-4802-43d4-947e-6896f68d8a68","Type":"ContainerStarted","Data":"6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.766419 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q9r6v" event={"ID":"8a5d9418-4802-43d4-947e-6896f68d8a68","Type":"ContainerStarted","Data":"1112873089e502588da49c74f38a75d844fcaa723ea571f6158d03cc82468281"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.772626 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.801871 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.822850 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.845555 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.862402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.862469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.862482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.862501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.862515 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.876718 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.900617 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.919523 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.936675 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.952301 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.964890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.964934 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.964947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.964965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.964978 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:03Z","lastTransitionTime":"2026-01-31T14:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.968844 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:03 crc kubenswrapper[4735]: I0131 14:59:03.986458 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.007555 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.028460 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.042881 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.054531 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068736 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.068882 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.082984 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.094968 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.106438 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.120493 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.135868 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.151511 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.168593 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.173130 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.173176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.173191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.173214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.173229 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.188005 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.212814 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.232620 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.232840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.233079 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.233193 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.233171751 +0000 UTC m=+38.006500793 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.233512 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.233415039 +0000 UTC m=+38.006744131 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.238783 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.272741 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.275827 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.275989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.276136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.276284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.276399 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.294205 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.313336 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.330189 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.333753 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.333807 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.333848 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334017 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334053 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334075 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334089 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334096 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334108 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334176 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.334157436 +0000 UTC m=+38.107486488 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334202 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.334191857 +0000 UTC m=+38.107520919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334574 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.334753 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.334727843 +0000 UTC m=+38.108056895 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.351935 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.379825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.380145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.380406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.380571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.380670 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.480033 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:30:59.516250534 +0000 UTC Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.483544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.483610 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.483628 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.483655 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.483676 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.539837 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.539892 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.540368 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:04 crc kubenswrapper[4735]: E0131 14:59:04.540556 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.586550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.586610 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.586622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.586639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.586654 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.688558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.688633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.688649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.688674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.688691 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.781027 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2" exitCode=0 Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.781095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.791675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.791735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.791755 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.791778 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.791797 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.807363 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.829494 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.845236 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.859641 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.878978 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.894760 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.894829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.894849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.894877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.894900 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.904227 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.936176 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.957633 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.981852 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.999538 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.999734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.999766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:04 crc kubenswrapper[4735]: I0131 14:59:04.999855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:04.999941 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:04Z","lastTransitionTime":"2026-01-31T14:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.021245 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e0874
54a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.042459 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.067066 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.090981 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.104568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.104645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.104916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.104959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.104973 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.106641 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.121221 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.142879 4735 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.245831 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.245879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.245890 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.245907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.245922 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.348264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.348460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.348549 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.348632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.348700 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.452172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.452199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.452208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.452223 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.452231 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.480771 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:29:53.573874936 +0000 UTC Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.539447 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:05 crc kubenswrapper[4735]: E0131 14:59:05.539611 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.555496 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.555555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.555572 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.555596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.555618 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.557221 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.573229 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.595037 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.614842 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.632264 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.645681 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.658817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.658884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.658906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.658935 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc 
kubenswrapper[4735]: I0131 14:59:05.658954 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.665813 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.694663 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.713530 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.733115 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.757687 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.761970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.762120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.762210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.762292 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.762349 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.777180 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.789026 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.789668 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.789736 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.792318 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerStarted","Data":"b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.795033 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.812206 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.827073 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.844455 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.851099 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.854316 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.861740 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.864939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.865011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.865035 4735 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.865068 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.865091 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.884632 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.904703 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.918354 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.933013 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.953268 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.968806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.968887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.968908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.968938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.968958 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:05Z","lastTransitionTime":"2026-01-31T14:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.975832 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:05 crc kubenswrapper[4735]: I0131 14:59:05.991524 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.008625 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.046989 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.069531 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.072737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.072775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.073117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.073172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.073192 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.086649 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.109856 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.127634 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.142711 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.157622 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.169163 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.175966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.176112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.176200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.176281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.176363 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.183935 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.199943 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.223155 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.241930 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.258763 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.279267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.279303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.279312 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.279327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.279337 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.284440 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\
":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 
14:59:06.305544 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:0
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.321505 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.337135 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.355091 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.369477 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff
70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.382173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.382245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.382282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.382297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.382306 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.384584 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.481684 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:28:01.495626892 +0000 UTC Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.484682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.484748 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.484772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc 
kubenswrapper[4735]: I0131 14:59:06.484805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.484832 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.539297 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:06 crc kubenswrapper[4735]: E0131 14:59:06.539790 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.539309 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:06 crc kubenswrapper[4735]: E0131 14:59:06.540215 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.588381 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.588461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.588473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.588499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.588518 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.691626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.691667 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.691696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.691712 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.691732 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.794992 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.795051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.795069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.795095 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.795114 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.800619 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f" exitCode=0 Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.800716 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.800777 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.824970 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a32
2b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.844797 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.860336 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.879133 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.897144 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.898376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.898451 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.898472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.898494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.898506 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:06Z","lastTransitionTime":"2026-01-31T14:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.917029 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.937418 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.951302 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.962406 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.976677 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:06 crc kubenswrapper[4735]: I0131 14:59:06.998242 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.001143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.001218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.001232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.001255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.001271 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.016080 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.030310 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.061171 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f
88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.089120 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.103555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.103719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.103734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.103754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.103765 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.206093 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.206126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.206135 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.206149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.206159 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.309052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.309101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.309114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.309136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.309154 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.411639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.411679 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.411687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.411701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.411711 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.482626 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:58:00.126676792 +0000 UTC Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.515719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.516219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.516242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.516406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.516663 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.540141 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:07 crc kubenswrapper[4735]: E0131 14:59:07.540322 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.625624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.625685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.625701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.625723 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.625744 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.738197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.738246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.738260 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.738279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.738291 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.808179 4735 generic.go:334] "Generic (PLEG): container finished" podID="28589772-80ef-4a88-9b68-eb15b241ef7f" containerID="8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb" exitCode=0 Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.808255 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerDied","Data":"8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.808394 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.836056 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.840534 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.840703 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.840805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.840876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.840933 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.849924 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.870879 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.885675 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.899573 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.911234 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.927360 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.956769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.956822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.956840 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.956864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.956878 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:07Z","lastTransitionTime":"2026-01-31T14:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.973091 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c379
6b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:07 crc kubenswrapper[4735]: I0131 14:59:07.988942 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.001757 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.035436 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f
88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.050786 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.060235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.060286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.060300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.060323 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.060337 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.067732 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.085131 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.101717 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.162666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.162701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.162711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.162726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.162738 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.266928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.266997 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.267007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.267028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.267043 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.369439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.369476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.369489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.369507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.369519 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.471942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.471989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.472005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.472025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.472041 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.482821 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:58:35.487629196 +0000 UTC Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.539944 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:08 crc kubenswrapper[4735]: E0131 14:59:08.540176 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.540642 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:08 crc kubenswrapper[4735]: E0131 14:59:08.540846 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.576243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.576302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.576321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.576345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.576363 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.678953 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.679023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.679040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.679074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.679092 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.782311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.782387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.782408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.782472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.782492 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.821017 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" event={"ID":"28589772-80ef-4a88-9b68-eb15b241ef7f","Type":"ContainerStarted","Data":"38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.824739 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/0.log" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.828182 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace" exitCode=1 Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.828237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.829228 4735 scope.go:117] "RemoveContainer" containerID="1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.845673 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f
88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.871386 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.886181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.886217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc 
kubenswrapper[4735]: I0131 14:59:08.886230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.886248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.886262 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.895165 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.915791 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.930309 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.945604 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.962261 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.979880 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.989518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.989559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.989580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.989608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:08 crc kubenswrapper[4735]: I0131 14:59:08.989627 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:08Z","lastTransitionTime":"2026-01-31T14:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:08.995782 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.013928 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.034297 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.051709 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.068855 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.083213 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.093169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.093242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.093261 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.093287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.093304 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.096300 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.111904 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.123647 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.147088 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.166824 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.186731 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.196199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.196288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.196566 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.196611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.196630 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.204865 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.235343 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"
host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.270232 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2
026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.294243 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.299470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.299521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.299538 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.299562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.299578 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.316961 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.354985 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.372536 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.387130 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.401974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.402040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.402053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.402072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: 
I0131 14:59:09.402087 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.403697 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.421160 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.483795 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:51:21.275284991 +0000 UTC Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.505624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.505681 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.505700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 
14:59:09.505726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.505745 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.539681 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:09 crc kubenswrapper[4735]: E0131 14:59:09.539924 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.610623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.610694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.610705 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.610724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.610738 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.714064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.714125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.714141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.714160 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.714178 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.816515 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.816587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.816612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.816649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.816675 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.833732 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/0.log" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.837526 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.837760 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.859389 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.880926 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.895699 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.918072 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.919376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.919455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.919468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.919489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.919503 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:09Z","lastTransitionTime":"2026-01-31T14:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.943470 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:09 crc kubenswrapper[4735]: I0131 14:59:09.978007 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.000187 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.022710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.022774 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.022793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.022822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.022842 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.023084 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.055807 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973
cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.076609 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.093844 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.108836 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.122340 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.125687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.125722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.125731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.125747 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.125760 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.134355 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.145161 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.229847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.229915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.229927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.229948 4735 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.229963 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.333221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.333287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.333309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.333338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.333356 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.436271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.436325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.436341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.436366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.436376 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.484764 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:09:54.102858138 +0000 UTC Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539170 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539222 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:10 crc kubenswrapper[4735]: E0131 14:59:10.539350 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539522 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.539549 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: E0131 14:59:10.539567 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.642714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.642788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.642807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.642834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.642855 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.746607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.746659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.746679 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.746704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.746723 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.845453 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/1.log" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.846728 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/0.log" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.849671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.849735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.849753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.849779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.849800 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.852529 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998" exitCode=1 Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.852610 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.852687 4735 scope.go:117] "RemoveContainer" containerID="1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.854007 4735 scope.go:117] "RemoveContainer" containerID="d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998" Jan 31 14:59:10 crc kubenswrapper[4735]: E0131 14:59:10.854344 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.876499 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.899159 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.919183 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.937222 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.952256 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.952305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.952334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.952353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.952366 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:10Z","lastTransitionTime":"2026-01-31T14:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.955941 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:10 crc kubenswrapper[4735]: I0131 14:59:10.978940 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:10.999868 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.020894 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.027390 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66"] Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.028135 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.032196 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.032828 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.051034 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.055880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.055928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.055947 
4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.056694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.056777 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.073841 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.103247 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.125234 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.125300 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcrx\" 
(UniqueName: \"kubernetes.io/projected/811d304b-a115-4b54-a0dd-e6b9c33cae90-kube-api-access-lpcrx\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.125360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.125388 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.126209 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: 
I0131 14:59:11.145293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.160356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.160556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.160587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.160619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.160640 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.176842 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.199270 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.219601 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cn
cf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.226125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcrx\" (UniqueName: \"kubernetes.io/projected/811d304b-a115-4b54-a0dd-e6b9c33cae90-kube-api-access-lpcrx\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.226197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.226257 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.226293 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.227377 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.229112 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.234799 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.244228 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/811d304b-a115-4b54-a0dd-e6b9c33cae90-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.253702 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcrx\" (UniqueName: \"kubernetes.io/projected/811d304b-a115-4b54-a0dd-e6b9c33cae90-kube-api-access-lpcrx\") pod \"ovnkube-control-plane-749d76644c-4pv66\" (UID: \"811d304b-a115-4b54-a0dd-e6b9c33cae90\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.261100 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.264291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.264338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.264351 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.264371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.264384 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.283388 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.308578 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.328016 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.363184 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.367596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.367635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.367650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.367669 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.367683 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.369959 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.393677 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c85
1d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.434948 4735 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:
58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.457099 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.472203 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.472258 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.472273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.472294 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.472309 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.476067 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.485467 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:42:29.552207156 +0000 UTC Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.495686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.514936 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.531283 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.539963 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:11 crc kubenswrapper[4735]: E0131 14:59:11.540145 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.548779 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.561809 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.574657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.574717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.574753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.574775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.574788 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.680500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.680576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.680598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.680626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.680650 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.783268 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.783332 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.783348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.783375 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.783391 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.857586 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" event={"ID":"811d304b-a115-4b54-a0dd-e6b9c33cae90","Type":"ContainerStarted","Data":"905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.857966 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" event={"ID":"811d304b-a115-4b54-a0dd-e6b9c33cae90","Type":"ContainerStarted","Data":"e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.858153 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" event={"ID":"811d304b-a115-4b54-a0dd-e6b9c33cae90","Type":"ContainerStarted","Data":"c22a461e4cb434271ed2ce86d3970feb41639fd7a933fe75f041eb2f537f45b9"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.861833 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/1.log" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.876990 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.886316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.886358 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.886366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.886382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 
14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.886395 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.899413 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.925591 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.950331 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.964547 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.981124 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.989077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.989112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.989121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.989134 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:11 crc kubenswrapper[4735]: I0131 14:59:11.989143 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:11Z","lastTransitionTime":"2026-01-31T14:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.001038 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.026827 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.040191 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.059553 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.078539 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.091647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.091683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.091695 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.091711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.091725 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.105288 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.120112 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.137319 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.148576 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rqxxz"] Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.149470 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.149586 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.164333 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.186015 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.195046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.195088 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.195102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.195120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.195136 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.203907 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.238602 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.238885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g47k\" (UniqueName: \"kubernetes.io/projected/ea89cfa6-d46d-4cda-a91e-a1d06a743204-kube-api-access-5g47k\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.238954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.239061 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.239233 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:59:28.239203032 +0000 UTC m=+54.012532104 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.239344 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.239479 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:28.239458959 +0000 UTC m=+54.012788041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.245870 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.269019 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.296022 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.298133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.298226 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.298247 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.298311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.298330 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.330347 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.340852 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g47k\" (UniqueName: \"kubernetes.io/projected/ea89cfa6-d46d-4cda-a91e-a1d06a743204-kube-api-access-5g47k\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.340944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.340989 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.341028 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.341070 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341266 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341296 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341316 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341392 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:28.34136329 +0000 UTC m=+54.114692372 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341462 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341571 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:28.341541976 +0000 UTC m=+54.114871158 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341691 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341719 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341741 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341799 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:28.341781563 +0000 UTC m=+54.115110765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341889 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.341941 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:12.841924117 +0000 UTC m=+38.615253189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.354906 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.363018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g47k\" (UniqueName: \"kubernetes.io/projected/ea89cfa6-d46d-4cda-a91e-a1d06a743204-kube-api-access-5g47k\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.377259 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/c
rcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.401015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.401059 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.401072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.401089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.401103 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.400887 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.424114 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.444877 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.462815 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.481764 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.486151 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:31:24.778580247 +0000 UTC Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.497796 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.504575 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.504618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.504630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.504647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.504694 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.514697 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.527763 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.539761 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.539797 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.539894 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.540011 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.543533 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.559283 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.608457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.608530 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.608543 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.608565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.608580 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.688680 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.688763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.688783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.688817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.688842 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.709149 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.715088 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.715162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.715177 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.715197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.715530 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.734585 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.740027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.740099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.740118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.740147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.740167 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.756844 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.761945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.762009 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.762027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.762055 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.762079 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.781759 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.786987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.787056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.787184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.787262 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.787290 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.807545 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.807741 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.810259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.810325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.810349 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.810379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.810402 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.847308 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.847523 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: E0131 14:59:12.847604 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:13.847585599 +0000 UTC m=+39.620914641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.913720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.913787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.913812 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.913842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:12 crc kubenswrapper[4735]: I0131 14:59:12.913867 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:12Z","lastTransitionTime":"2026-01-31T14:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.016543 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.016588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.016603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.016620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.016631 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.119303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.119374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.119391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.119417 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.119462 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.223519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.223588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.223611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.223637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.223656 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.327355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.327474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.327493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.327520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.327537 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.430345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.430407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.430480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.430507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.430524 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.486993 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:38:42.83328709 +0000 UTC Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.534392 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.534486 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.534505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.534529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.534546 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.540652 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.540703 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:13 crc kubenswrapper[4735]: E0131 14:59:13.540836 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:13 crc kubenswrapper[4735]: E0131 14:59:13.540982 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.636714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.636762 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.636775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.636791 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.637030 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.747009 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.747066 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.747087 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.747125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.747147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.851459 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.851521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.851535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.851559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.851575 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.858159 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:13 crc kubenswrapper[4735]: E0131 14:59:13.858472 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:13 crc kubenswrapper[4735]: E0131 14:59:13.858585 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:15.858555847 +0000 UTC m=+41.631884919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.954804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.955513 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.955730 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.955961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:13 crc kubenswrapper[4735]: I0131 14:59:13.956154 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:13Z","lastTransitionTime":"2026-01-31T14:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.059486 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.059552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.059573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.059596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.059615 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.162129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.162524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.162748 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.162915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.163061 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.266097 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.266137 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.266145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.266159 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.266168 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.369350 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.369492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.369515 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.369543 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.369577 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.472416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.472534 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.472557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.472594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.472620 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.487412 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:54:13.686716606 +0000 UTC Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.540059 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.540267 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:14 crc kubenswrapper[4735]: E0131 14:59:14.540388 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:14 crc kubenswrapper[4735]: E0131 14:59:14.540622 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.575760 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.575802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.575817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.575863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.575881 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.679264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.679339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.679366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.679395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.679419 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.782192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.782278 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.782298 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.782327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.782345 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.885035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.885107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.885132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.885161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.885179 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.988408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.988495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.988509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.988536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:14 crc kubenswrapper[4735]: I0131 14:59:14.988552 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:14Z","lastTransitionTime":"2026-01-31T14:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.092863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.092940 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.092962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.092993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.093015 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.196010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.196406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.196499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.196530 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.196549 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.299296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.299356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.299366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.299387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.299401 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.402184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.402286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.402298 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.402331 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.402341 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.487647 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:41:53.202713511 +0000 UTC Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.505178 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.505250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.505275 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.505305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.505322 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.539860 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:15 crc kubenswrapper[4735]: E0131 14:59:15.540249 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.540296 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:15 crc kubenswrapper[4735]: E0131 14:59:15.540457 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.542246 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.570637 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.592582 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.607788 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.608896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.608952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.608971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.608995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.609012 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.625618 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.647067 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.669217 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.702770 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.711837 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.711893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.711917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.711952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.711976 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.726231 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.752945 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.814909 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.815227 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.815236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.815250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.815259 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.818773 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.836543 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.847686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.860483 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.875074 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.883783 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:15 crc kubenswrapper[4735]: E0131 14:59:15.883910 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:15 crc kubenswrapper[4735]: E0131 14:59:15.883956 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:19.883943131 +0000 UTC m=+45.657272173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.887558 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.887876 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.889877 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.890270 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.902066 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.914315 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.917891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.917939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.917949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.917966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.917976 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:15Z","lastTransitionTime":"2026-01-31T14:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.928854 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.951936 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-3
1T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73c
a2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.968330 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:15 crc kubenswrapper[4735]: I0131 14:59:15.983269 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.015516 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973
cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db25862cf5ed6e3c70ec967ed588819bf34708f88023aeb66891024eb671ace\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"message\\\":\\\"ce event handler 1 for removal\\\\nI0131 14:59:08.421567 5960 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:59:08.421599 5960 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:59:08.421619 5960 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:59:08.421689 5960 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:59:08.421700 5960 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:59:08.421760 5960 factory.go:656] Stopping watch factory\\\\nI0131 14:59:08.421760 5960 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:59:08.422019 5960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:59:08.422073 5960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:59:08.422118 5960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:59:08.422154 5960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:59:08.422191 5960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:59:08.422234 5960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:59:08.422276 5960 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: 
map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.020377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.020494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.020514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.020548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.020567 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.037714 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.051824 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e
62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.070697 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.083665 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.099782 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.111775 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.128379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.128476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.128509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.128548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.128570 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.132682 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.151163 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.167567 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.182166 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.202278 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.219694 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.232218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.232286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.232309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.232338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.232358 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.335233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.335296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.335318 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.335343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.335362 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.438750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.438803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.438816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.438835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.438848 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.489481 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:59:43.111064753 +0000 UTC Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.540067 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.540085 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:16 crc kubenswrapper[4735]: E0131 14:59:16.540237 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:16 crc kubenswrapper[4735]: E0131 14:59:16.540303 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.541817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.541862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.541879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.541901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.541918 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.645334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.645472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.645540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.645577 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.645597 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.750197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.750267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.750285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.750313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.750333 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.854064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.854141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.854170 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.854208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.854230 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.958184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.958282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.958306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.958343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:16 crc kubenswrapper[4735]: I0131 14:59:16.958372 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:16Z","lastTransitionTime":"2026-01-31T14:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.062617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.062683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.062703 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.062727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.062747 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.165593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.165712 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.165731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.165754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.165770 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.269077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.269132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.269145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.269166 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.269176 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.372508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.372572 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.372596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.372646 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.372672 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.476005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.476077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.476096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.476120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.476139 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.490447 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:28:45.600723319 +0000 UTC Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.540221 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.540237 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:17 crc kubenswrapper[4735]: E0131 14:59:17.540615 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:17 crc kubenswrapper[4735]: E0131 14:59:17.540819 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.579964 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.580036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.580057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.580087 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.580109 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.683299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.683385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.683405 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.683460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.683483 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.787344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.787412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.787473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.787505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.787524 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.891008 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.891076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.891093 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.891124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.891144 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.994636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.994717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.994738 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.994768 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:17 crc kubenswrapper[4735]: I0131 14:59:17.994790 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:17Z","lastTransitionTime":"2026-01-31T14:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.098699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.098776 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.098799 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.098828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.098849 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.201821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.201869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.201882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.201900 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.201913 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.304537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.304612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.304637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.304666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.304694 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.408169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.408208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.408218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.408234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.408246 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.490971 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:25:47.162247114 +0000 UTC Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.512145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.512193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.512211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.512237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.512255 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.539768 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.539839 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:18 crc kubenswrapper[4735]: E0131 14:59:18.539949 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:18 crc kubenswrapper[4735]: E0131 14:59:18.540126 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.615868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.615925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.615942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.615963 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.615978 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.718806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.718863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.718882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.718909 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.718932 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.822978 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.823032 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.823057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.823086 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.823123 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.925363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.925509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.925532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.925557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:18 crc kubenswrapper[4735]: I0131 14:59:18.925577 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:18Z","lastTransitionTime":"2026-01-31T14:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.028867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.028946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.028958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.028979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.028993 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.131724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.131782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.131801 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.131828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.131848 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.174655 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.175860 4735 scope.go:117] "RemoveContainer" containerID="d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998" Jan 31 14:59:19 crc kubenswrapper[4735]: E0131 14:59:19.176137 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.195309 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.210852 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234543 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234886 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.234927 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.250504 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.268391 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.286686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.306942 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.322781 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.338566 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.338607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.338647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.338665 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.338678 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.343956 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.362382 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.380767 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.413409 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.439192 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.442610 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.442802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.442903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.442925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.442961 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.460802 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.481141 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.491323 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:56:26.777713042 +0000 UTC Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.508312 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.529776 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:19Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.539330 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.539339 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:19 crc kubenswrapper[4735]: E0131 14:59:19.539659 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:19 crc kubenswrapper[4735]: E0131 14:59:19.539745 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.546782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.546867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.546926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.546966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.546993 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.650663 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.650719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.650746 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.650779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.650803 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.753373 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.753464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.753489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.753521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.753546 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.857019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.857108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.857133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.857163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.857184 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.936156 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:19 crc kubenswrapper[4735]: E0131 14:59:19.936387 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:19 crc kubenswrapper[4735]: E0131 14:59:19.936469 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:27.936453499 +0000 UTC m=+53.709782541 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.960211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.960286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.960309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.960342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:19 crc kubenswrapper[4735]: I0131 14:59:19.960368 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:19Z","lastTransitionTime":"2026-01-31T14:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.063370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.063440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.063449 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.063464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.063474 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.166242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.166299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.166318 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.166344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.166363 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.270006 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.270125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.270182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.270208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.270228 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.373645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.373720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.373744 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.373771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.373792 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.477692 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.477740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.477751 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.477768 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.477781 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.491998 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:54:24.790343681 +0000 UTC Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.540001 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.540076 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:20 crc kubenswrapper[4735]: E0131 14:59:20.540211 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:20 crc kubenswrapper[4735]: E0131 14:59:20.540259 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.580212 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.580272 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.580283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.580299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.580309 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.683139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.683187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.683200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.683217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.683229 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.786064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.786117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.786138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.786162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.786180 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.890200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.890259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.890277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.890300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.890317 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.993477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.993562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.993576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.993597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:20 crc kubenswrapper[4735]: I0131 14:59:20.993633 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:20Z","lastTransitionTime":"2026-01-31T14:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.096533 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.096601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.096622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.096650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.096670 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.199947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.200314 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.200488 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.200625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.200774 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.304787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.304860 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.304878 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.304905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.304923 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.408071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.408115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.408126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.408143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.408154 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.492536 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:30:11.80432747 +0000 UTC Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.511079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.511135 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.511151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.511173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.511189 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.540050 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.540146 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:21 crc kubenswrapper[4735]: E0131 14:59:21.540249 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:21 crc kubenswrapper[4735]: E0131 14:59:21.540473 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.614064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.614117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.614129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.614149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.614164 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.716580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.716613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.716622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.716635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.716644 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.820075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.820150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.820159 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.820177 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.820192 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.922630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.922675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.922694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.922710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:21 crc kubenswrapper[4735]: I0131 14:59:21.922720 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:21Z","lastTransitionTime":"2026-01-31T14:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.025633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.025688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.025700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.025719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.025733 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.128534 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.128590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.128611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.128627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.128638 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.231980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.232049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.232062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.232085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.232102 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.334307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.334378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.334390 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.334409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.334438 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.438289 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.438365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.438384 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.438415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.438475 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.492717 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:06:38.413242109 +0000 UTC Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.539611 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.539745 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:22 crc kubenswrapper[4735]: E0131 14:59:22.539776 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:22 crc kubenswrapper[4735]: E0131 14:59:22.540006 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.542563 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.542612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.542626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.542645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.542659 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.645914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.646004 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.646029 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.646064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.646091 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.750290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.750371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.750393 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.750453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.750480 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.855510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.855611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.855642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.855680 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.855705 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.958745 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.958802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.958816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.958836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:22 crc kubenswrapper[4735]: I0131 14:59:22.958850 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:22Z","lastTransitionTime":"2026-01-31T14:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.061365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.061455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.061470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.061493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.061507 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.164815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.165197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.165265 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.165342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.165409 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.181902 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.181943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.181955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.181978 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.181994 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.200567 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:23Z is after 
2025-08-24T17:21:41Z" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.205609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.205834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.205926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.206031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.206119 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.221774 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:23Z is after 
2025-08-24T17:21:41Z" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.227583 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.227652 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.227668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.227698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.227717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.248467 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:23Z is after 
2025-08-24T17:21:41Z" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.253784 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.253829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.253843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.253864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.253879 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.278078 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:23Z is after 
2025-08-24T17:21:41Z" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.282659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.282719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.282739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.282766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.282786 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.305755 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:23Z is after 
2025-08-24T17:21:41Z" Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.305878 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.308203 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.308276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.308300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.308338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.308364 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.411108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.411187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.411213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.411249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.411272 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.493619 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:07:11.927739622 +0000 UTC Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.514127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.514164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.514174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.514191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.514203 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.540673 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.540815 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.540921 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:23 crc kubenswrapper[4735]: E0131 14:59:23.541204 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.616370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.616480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.616505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.616534 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.616557 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.719315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.719611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.719642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.719673 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.719697 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.822036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.822077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.822089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.822106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.822121 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.924499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.924579 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.924598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.924631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:23 crc kubenswrapper[4735]: I0131 14:59:23.924650 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:23Z","lastTransitionTime":"2026-01-31T14:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.027816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.027876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.027889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.027911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.027926 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.130858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.130924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.130942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.130969 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.130989 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.233918 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.233962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.233971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.233988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.234003 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.336211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.336268 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.336281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.336305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.336319 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.439752 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.439813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.439823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.439843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.439854 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.494398 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:59:36.500118494 +0000 UTC Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.539320 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.539407 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:24 crc kubenswrapper[4735]: E0131 14:59:24.539587 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:24 crc kubenswrapper[4735]: E0131 14:59:24.539775 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.543003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.543040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.543053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.543070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.543084 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.592317 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.605228 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.617514 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.639148 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.646649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.646707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.646725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.646754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.646774 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.654688 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc 
kubenswrapper[4735]: I0131 14:59:24.679322 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.699467 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.721737 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.747766 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.750218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.750289 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.750313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.750342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.750367 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.765140 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.781116 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.814965 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973
cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.837083 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.853584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.853639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.853658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.853682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.853701 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.856302 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.876293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.898739 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.922486 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.942359 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff
70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.956951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.957023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.957041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.957067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.957089 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:24Z","lastTransitionTime":"2026-01-31T14:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:24 crc kubenswrapper[4735]: I0131 14:59:24.959494 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:24Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.060573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.060643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.060658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.060675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.060688 4735 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.164324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.164399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.164447 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.164471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.164490 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.267899 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.267949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.267962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.267980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.267992 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.371125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.371194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.371209 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.371233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.371251 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.474479 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.474542 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.474556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.474574 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.474586 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.495062 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:18:46.298785724 +0000 UTC Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.539093 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.539162 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:25 crc kubenswrapper[4735]: E0131 14:59:25.539345 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:25 crc kubenswrapper[4735]: E0131 14:59:25.539508 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.562315 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.577848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.578002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.578037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.578074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.578098 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.587188 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.606136 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.631635 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.653071 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.672749 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.680960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.681022 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.681046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.681077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.681101 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.687848 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.707129 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.724198 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.752881 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e491
17b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.777221 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.783735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.783787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.783804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.783828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.783847 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.794071 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.828312 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.854925 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.870792 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.887480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.887556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.887575 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.887610 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.887629 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.891087 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.912546 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.933653 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:25Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.991812 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.991894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.991919 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.991954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:25 crc kubenswrapper[4735]: I0131 14:59:25.991977 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:25Z","lastTransitionTime":"2026-01-31T14:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.094914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.094963 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.094977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.094993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.095006 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.197578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.197633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.197651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.197677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.197697 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.300256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.300301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.300312 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.300329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.300341 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.404153 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.404245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.404269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.404329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.404349 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.495928 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:47:12.648445622 +0000 UTC Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.506828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.506874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.506890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.506914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.506929 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.539365 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.539397 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:26 crc kubenswrapper[4735]: E0131 14:59:26.539549 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:26 crc kubenswrapper[4735]: E0131 14:59:26.539661 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.610012 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.610053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.610061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.610075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.610084 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.716583 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.716642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.716664 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.716692 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.716717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.819547 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.819582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.819592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.819607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.819617 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.922608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.922666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.922677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.922694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:26 crc kubenswrapper[4735]: I0131 14:59:26.922706 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:26Z","lastTransitionTime":"2026-01-31T14:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.026402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.026490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.026499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.026519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.026532 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.129444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.129497 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.129507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.129528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.129539 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.232083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.232133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.232142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.232161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.232175 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.335046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.335094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.335180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.335204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.335222 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.438471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.438537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.438556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.438581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.438597 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.497144 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:28:26.799238881 +0000 UTC Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.540039 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:27 crc kubenswrapper[4735]: E0131 14:59:27.540210 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.540935 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:27 crc kubenswrapper[4735]: E0131 14:59:27.541123 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.542344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.542379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.542394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.542412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.542454 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.645537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.645590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.645601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.645618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.645632 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.742466 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.748987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.749347 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.749602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.749814 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.749994 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.766821 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.785974 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.807295 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.825544 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.844877 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.853414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.853535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.853563 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.853600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.853623 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.861137 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.878653 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 
14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.896587 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.927823 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.949145 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956776 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956814 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:27Z","lastTransitionTime":"2026-01-31T14:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:27 crc kubenswrapper[4735]: E0131 14:59:27.956869 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.956760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:27 crc kubenswrapper[4735]: E0131 14:59:27.956926 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 14:59:43.956908385 +0000 UTC m=+69.730237427 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.975988 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:27 crc kubenswrapper[4735]: I0131 14:59:27.993204 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.008291 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.019044 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.038574 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.052242 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.059472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.059630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.059702 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.059781 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.059850 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.065083 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.085867 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973
cb16c03fcbac29e4e0a63998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.165877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.165951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.165971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.165997 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.166016 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.261243 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.261468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.261519 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:00.261473585 +0000 UTC m=+86.034802667 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.261644 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.261749 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 15:00:00.261717832 +0000 UTC m=+86.035047044 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.269772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.269839 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.269856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.269886 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.269903 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.362899 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.362973 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.363041 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363256 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363285 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363281 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 
14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363304 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363309 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363574 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363601 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363603 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 15:00:00.363516121 +0000 UTC m=+86.136845323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363726 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 15:00:00.363660135 +0000 UTC m=+86.136989217 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.363758 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 15:00:00.363742867 +0000 UTC m=+86.137072159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.373192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.373249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.373271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.373301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.373323 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.478344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.478448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.478474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.478505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.478532 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.497952 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:08:43.314745971 +0000 UTC Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.539689 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.539693 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.539888 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:28 crc kubenswrapper[4735]: E0131 14:59:28.539981 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.581795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.581860 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.581872 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.581893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.581905 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.685281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.685551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.685567 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.685588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.685602 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.789480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.789598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.789614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.789643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.789655 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.893037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.893109 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.893121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.893141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.893155 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.995954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.996007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.996049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.996072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:28 crc kubenswrapper[4735]: I0131 14:59:28.996086 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:28Z","lastTransitionTime":"2026-01-31T14:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.099231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.099283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.099293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.099312 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.099323 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.201763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.201836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.201855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.201883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.201907 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.304891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.304936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.304945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.304967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.304977 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.408342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.408416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.408457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.408482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.408500 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.498233 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:55:56.074733621 +0000 UTC Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.512049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.512110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.512127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.512180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.512198 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.539493 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.539650 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:29 crc kubenswrapper[4735]: E0131 14:59:29.539820 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:29 crc kubenswrapper[4735]: E0131 14:59:29.540316 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.540839 4735 scope.go:117] "RemoveContainer" containerID="d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.615618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.616035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.616280 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.616508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.616740 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.720416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.720505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.720517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.720546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.720561 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.823048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.823089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.823099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.823123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.823133 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.925795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.925850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.925867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.925889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.925904 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:29Z","lastTransitionTime":"2026-01-31T14:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.947671 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/1.log" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.951209 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b"} Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.951820 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.971123 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:29 crc kubenswrapper[4735]: I0131 14:59:29.990408 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.010316 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.027853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.027890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.027899 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.027915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.027948 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.039994 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.058990 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.071936 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.093012 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.109351 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.122539 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.130442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.130498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.130514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.130533 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.130546 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.135774 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.146280 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.158416 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.174709 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.191695 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.205191 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.218446 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.232921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.232960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.232970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.232986 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.232996 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.233245 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.245718 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.336329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.336379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.336392 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.336412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.336444 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.439676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.439758 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.439783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.439815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.439836 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.499284 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:17:50.557596986 +0000 UTC
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.539758 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.539802 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:59:30 crc kubenswrapper[4735]: E0131 14:59:30.539963 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:59:30 crc kubenswrapper[4735]: E0131 14:59:30.540482 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.542416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.542474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.542491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.542511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.542522 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.646234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.646306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.646328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.646354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.646372 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.749638 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.749707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.749728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.749757 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.749779 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.853409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.853552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.853567 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.853593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.853609 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.959418 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.959534 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.959557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.959589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.959615 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:30Z","lastTransitionTime":"2026-01-31T14:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.962326 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/2.log"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.963655 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/1.log"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.968130 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b" exitCode=1
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.968184 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b"}
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.968236 4735 scope.go:117] "RemoveContainer" containerID="d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.969557 4735 scope.go:117] "RemoveContainer" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b"
Jan 31 14:59:30 crc kubenswrapper[4735]: E0131 14:59:30.969928 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81"
Jan 31 14:59:30 crc kubenswrapper[4735]: I0131 14:59:30.984301 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:30Z is after 2025-08-24T17:21:41Z" Jan 31 
14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.005295 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.021470 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.038218 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.053469 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.062879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.063035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.063099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.063301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.063369 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.070234 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.087686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f
51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.099531 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.128974 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.147631 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.161493 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.166921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.166992 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.167018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.167047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.167069 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.180288 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d539f16e4cacc3961143da9265646cc1eb4c6973cb16c03fcbac29e4e0a63998\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:09Z\\\",\\\"message\\\":\\\"1.Pod openshift-multus/multus-hg7gl after 0 failed attempt(s)\\\\nI0131 14:59:09.866735 6158 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-hg7gl\\\\nF0131 14:59:09.866276 6158 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:09Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:59:09.866683 6158 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0131 14:59:09.866788 6158 services_controller.go:443] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs for 
network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.199824 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.212579 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.225774 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.239920 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.256154 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.269244 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.269286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.269302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.269324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.269340 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.271002 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.372532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.372576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.372589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.372607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.372619 4735 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.476241 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.476307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.476323 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.476343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.476360 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.499906 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:17:47.079980522 +0000 UTC Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.539404 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.539415 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:31 crc kubenswrapper[4735]: E0131 14:59:31.539664 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:31 crc kubenswrapper[4735]: E0131 14:59:31.539759 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.579728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.579776 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.579789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.579808 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.579822 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.683330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.683381 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.683390 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.683406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.683416 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.786797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.786842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.786852 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.786866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.786876 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.889868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.889947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.889972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.890000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.890026 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.974479 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/2.log" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.979767 4735 scope.go:117] "RemoveContainer" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b" Jan 31 14:59:31 crc kubenswrapper[4735]: E0131 14:59:31.980069 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.992816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.992855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.992866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.992886 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:31 crc kubenswrapper[4735]: I0131 14:59:31.992895 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:31Z","lastTransitionTime":"2026-01-31T14:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.006447 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.023551 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.037413 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.052197 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.066972 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.079788 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.095048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.095075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.095083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.095097 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.095106 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.101473 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.121128 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.137219 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.165218 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.184529 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.197983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.198042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.198061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.198086 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.198111 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.198347 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.215616 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.230798 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.245700 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.259358 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58
:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.274516 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.287309 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.303653 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.303724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.303733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.303754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.303767 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.407777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.407839 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.407850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.407870 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.407883 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.500983 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:47:49.338291302 +0000 UTC Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.515893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.516704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.516754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.516793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.516819 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.539242 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.539284 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:32 crc kubenswrapper[4735]: E0131 14:59:32.539463 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:32 crc kubenswrapper[4735]: E0131 14:59:32.539679 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.620090 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.620164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.620186 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.620221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.620245 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.723328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.723395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.723409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.723459 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.723475 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.826936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.827012 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.827029 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.827062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.827084 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.930169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.930219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.930232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.930251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:32 crc kubenswrapper[4735]: I0131 14:59:32.930268 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:32Z","lastTransitionTime":"2026-01-31T14:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.032962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.033012 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.033023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.033039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.033049 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.135525 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.135594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.135612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.135658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.135676 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.237902 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.237949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.237962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.237977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.237987 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.340871 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.340928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.340940 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.340958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.340968 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.378803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.378849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.378860 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.378878 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.378891 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.394683 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.399449 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.399491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.399502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.399520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.399530 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.420594 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.428110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.428170 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.428187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.428213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.428231 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.447314 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.452756 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.452808 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.452825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.452847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.452867 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.477275 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.482385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.482482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.482500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.482524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.482541 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.501136 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.501243 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.501239 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:47:01.436288841 +0000 UTC Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.503108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.503157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.503175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.503197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.503215 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.539930 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.540066 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.540230 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:33 crc kubenswrapper[4735]: E0131 14:59:33.540400 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.605953 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.606039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.606075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.606112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.606137 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.709514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.709580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.709597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.709624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.709641 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.813603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.813712 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.813734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.813759 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.813786 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.917060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.917162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.917180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.917213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:33 crc kubenswrapper[4735]: I0131 14:59:33.917235 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:33Z","lastTransitionTime":"2026-01-31T14:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.020177 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.020248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.020272 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.020303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.020327 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.123498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.123566 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.123586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.123616 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.123640 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.233493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.233581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.233606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.233638 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.233667 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.337398 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.337571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.337590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.337619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.337637 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.440282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.440351 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.440370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.440394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.440412 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.502152 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:54:42.69805592 +0000 UTC Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.539394 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.539455 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:34 crc kubenswrapper[4735]: E0131 14:59:34.539649 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:34 crc kubenswrapper[4735]: E0131 14:59:34.539767 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.543078 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.543123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.543140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.543162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.543180 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.646176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.646215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.646224 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.646237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.646246 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.749864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.749922 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.749936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.749962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.749975 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.853386 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.853480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.853495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.853512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.853524 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.957624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.957677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.957696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.957721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:34 crc kubenswrapper[4735]: I0131 14:59:34.957739 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:34Z","lastTransitionTime":"2026-01-31T14:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.062044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.062124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.062144 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.062172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.062194 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.164793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.164865 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.164885 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.164912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.164927 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.267882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.267966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.267991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.268022 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.268044 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.371361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.371471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.371487 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.371512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.371527 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.475295 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.475367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.475385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.475448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.475476 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.503077 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:39:37.183275188 +0000 UTC Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.539151 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.539164 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:35 crc kubenswrapper[4735]: E0131 14:59:35.539383 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:35 crc kubenswrapper[4735]: E0131 14:59:35.539582 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.567247 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\
\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-3
1T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.578981 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.579051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.579069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.579098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.579116 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.590015 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.618889 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.652494 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.668559 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.681869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.681930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.681955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.681986 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.682008 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.692591 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal 
error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.714585 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.735168 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.755668 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.776912 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58
:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.785065 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.785132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.785151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.785231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.785252 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.796112 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.860948 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.882989 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 
14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.888363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.888470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.888492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.888516 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.888537 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.914811 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 
14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.953026 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.972527 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.989058 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:35Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.991041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.991101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.991119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.991143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:35 crc kubenswrapper[4735]: I0131 14:59:35.991169 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:35Z","lastTransitionTime":"2026-01-31T14:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.011804 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.095480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.095584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.095604 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.095659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.095680 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.199235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.199293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.199311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.199338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.199363 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.302501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.302585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.302609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.302647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.302671 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.405642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.405708 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.405727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.405754 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.405774 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.504201 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:15:37.070593976 +0000 UTC Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.509788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.509904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.509923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.509949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.509971 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.539213 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.539323 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:36 crc kubenswrapper[4735]: E0131 14:59:36.539520 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:36 crc kubenswrapper[4735]: E0131 14:59:36.539700 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.613677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.613725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.613736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.613750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.613759 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.717319 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.717395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.717417 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.717472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.717494 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.820756 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.820816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.820836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.820864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.820883 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.923978 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.924066 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.924093 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.924123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:36 crc kubenswrapper[4735]: I0131 14:59:36.924144 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:36Z","lastTransitionTime":"2026-01-31T14:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.028083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.028114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.028124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.028139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.028148 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.131335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.131407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.131466 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.131716 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.131745 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.235219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.235286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.235307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.235332 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.235351 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.339683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.339759 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.339779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.339805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.339823 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.443031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.443094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.443107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.443129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.443147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.504580 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:00:25.239560462 +0000 UTC Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.540137 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.540292 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:37 crc kubenswrapper[4735]: E0131 14:59:37.540392 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:37 crc kubenswrapper[4735]: E0131 14:59:37.540575 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.547708 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.547795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.547819 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.547849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.547874 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.651965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.652028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.652046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.652071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.652093 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.756085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.756168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.756191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.756221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.756239 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.860335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.860386 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.860399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.860442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.860458 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.963808 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.964112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.964318 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.964350 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:37 crc kubenswrapper[4735]: I0131 14:59:37.964372 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:37Z","lastTransitionTime":"2026-01-31T14:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.068502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.068574 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.068631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.068656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.068675 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.172223 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.172350 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.172378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.172457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.172481 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.276807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.276879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.276903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.276933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.276954 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.380847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.380921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.380934 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.380960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.380977 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.484821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.484894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.484911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.484938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.484957 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.505614 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:10:01.482646777 +0000 UTC Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.539230 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.539299 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:38 crc kubenswrapper[4735]: E0131 14:59:38.539467 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:38 crc kubenswrapper[4735]: E0131 14:59:38.539583 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.588453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.588535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.588554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.588584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.588611 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.691368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.691494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.691518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.691545 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.691573 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.795573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.795649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.795672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.795707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.795728 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.899667 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.899750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.899774 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.899806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:38 crc kubenswrapper[4735]: I0131 14:59:38.899825 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:38Z","lastTransitionTime":"2026-01-31T14:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.008355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.008452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.008470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.008503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.008521 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.112545 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.112582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.112593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.112611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.112621 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.217338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.217477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.217499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.217530 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.217553 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.321210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.321263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.321273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.321289 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.321300 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.425343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.425397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.425411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.425463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.425473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.506091 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:33:39.234745066 +0000 UTC Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.528725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.528832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.528857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.528892 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.528913 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.539992 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:39 crc kubenswrapper[4735]: E0131 14:59:39.540094 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.540239 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:39 crc kubenswrapper[4735]: E0131 14:59:39.540294 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.632195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.632246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.632264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.632295 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.632314 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.736656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.736710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.736727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.736753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.736772 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.840072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.840138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.840155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.840181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.840198 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.944173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.944243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.944262 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.944288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:39 crc kubenswrapper[4735]: I0131 14:59:39.944311 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:39Z","lastTransitionTime":"2026-01-31T14:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.048013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.048074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.048089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.048111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.048125 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.151906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.151961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.151972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.151992 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.152002 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.254938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.255017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.255040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.255071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.255094 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.359082 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.359132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.359146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.359166 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.359181 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.461955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.461998 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.462007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.462028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.462038 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.506681 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:03:46.177548618 +0000 UTC Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.539409 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:40 crc kubenswrapper[4735]: E0131 14:59:40.539600 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.539412 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:40 crc kubenswrapper[4735]: E0131 14:59:40.539779 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.564843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.564911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.564929 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.564957 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.564975 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.667804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.667868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.667880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.667911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.667923 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.771070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.771110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.771120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.771135 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.771146 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.874382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.874452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.874462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.874484 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.874494 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.977585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.977834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.977856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.977880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:40 crc kubenswrapper[4735]: I0131 14:59:40.977897 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:40Z","lastTransitionTime":"2026-01-31T14:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.081325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.081400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.081417 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.081469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.081490 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.183975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.184054 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.184066 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.184105 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.184116 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.287573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.287657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.287676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.287704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.287726 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.391034 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.391096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.391114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.391140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.391158 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.494948 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.495031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.495051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.495075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.495090 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.507587 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:43:21.961740047 +0000 UTC Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.539381 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:41 crc kubenswrapper[4735]: E0131 14:59:41.539574 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.539416 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:41 crc kubenswrapper[4735]: E0131 14:59:41.539889 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.598536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.598587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.598600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.598620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.598633 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.701936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.701996 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.702012 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.702038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.702055 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.805624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.805688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.805705 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.805732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.805751 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.908713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.908784 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.908807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.908835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:41 crc kubenswrapper[4735]: I0131 14:59:41.908859 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:41Z","lastTransitionTime":"2026-01-31T14:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.011859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.011932 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.011950 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.011977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.011994 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.114857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.114925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.114967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.114994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.115014 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.218717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.218777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.218787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.218807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.218819 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.321741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.321789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.321800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.321823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.321838 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.424840 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.424894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.424907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.424925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.424938 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.508404 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:13:32.405111944 +0000 UTC Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.531209 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.531281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.531315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.531349 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.531380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.539727 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.539774 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:42 crc kubenswrapper[4735]: E0131 14:59:42.539869 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:42 crc kubenswrapper[4735]: E0131 14:59:42.540024 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.634681 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.634725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.634735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.634750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.634761 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.737699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.737753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.737765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.737786 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.737800 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.839765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.839807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.839818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.839834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.839844 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.942925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.942982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.942995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.943018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:42 crc kubenswrapper[4735]: I0131 14:59:42.943032 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:42Z","lastTransitionTime":"2026-01-31T14:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.046013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.046081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.046100 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.046125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.046144 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.148548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.148590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.148600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.148618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.148628 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.251687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.251759 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.251778 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.251804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.251824 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.354859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.354921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.354937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.354960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.354974 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.457637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.457671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.457681 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.457697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.457708 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.509051 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:04:26.146583464 +0000 UTC Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.540516 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.540666 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.540740 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.540926 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.559464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.559508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.559518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.559535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.559544 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.647117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.647214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.647234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.647259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.647278 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.667332 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.674861 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.674889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.674898 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.674916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.674927 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.688838 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.693558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.693591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.693603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.693617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.693628 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.708822 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.713711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.713767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.713780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.713799 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.713810 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.726261 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.730709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.730744 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.730755 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.730769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.730779 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.743554 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.743671 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.746088 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.746172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.746194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.746222 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.746243 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.849253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.849315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.849328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.849346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.849357 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.952125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.952167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.952175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.952191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.952203 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:43Z","lastTransitionTime":"2026-01-31T14:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:43 crc kubenswrapper[4735]: I0131 14:59:43.962051 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.962287 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:43 crc kubenswrapper[4735]: E0131 14:59:43.962396 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 15:00:15.962375483 +0000 UTC m=+101.735704525 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.054633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.054691 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.054711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.054739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.054760 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.158609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.158667 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.158677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.158695 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.158706 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.261997 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.262058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.262070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.262091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.262103 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.365182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.365232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.365245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.365264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.365276 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.467272 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.467301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.467310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.467322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.467331 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.510004 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:30:14.044549323 +0000 UTC Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.539323 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.539500 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:44 crc kubenswrapper[4735]: E0131 14:59:44.539629 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:44 crc kubenswrapper[4735]: E0131 14:59:44.539914 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.570204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.570281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.570297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.570322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.570338 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.673117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.673150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.673161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.673179 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.673189 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.775523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.775605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.775625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.775699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.775718 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.879030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.879299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.879324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.879351 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.879402 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.983488 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.983554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.983571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.983597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:44 crc kubenswrapper[4735]: I0131 14:59:44.983620 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:44Z","lastTransitionTime":"2026-01-31T14:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.086776 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.086848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.086863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.086889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.086903 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.190381 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.190465 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.190483 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.190507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.190526 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.293298 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.293361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.293371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.293393 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.293406 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.395675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.395722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.395735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.395750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.395762 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.499261 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.499322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.499345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.499376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.499395 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.510654 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:49:51.97222045 +0000 UTC Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.540176 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.540206 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:45 crc kubenswrapper[4735]: E0131 14:59:45.540389 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:45 crc kubenswrapper[4735]: E0131 14:59:45.540594 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.563195 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.584666 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.602558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.602614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.602626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.602648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.602661 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.613478 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.630932 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.648703 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.666441 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.685130 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.701527 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.709072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.709113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.709125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.709145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.709159 4735 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.720058 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\
\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.738713 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.762467 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.784233 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.814938 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.819128 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.819160 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.819173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.819192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.819202 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.843021 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.867435 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.882965 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.896660 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.921850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.921882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.921894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.921912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.921926 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:45Z","lastTransitionTime":"2026-01-31T14:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:45 crc kubenswrapper[4735]: I0131 14:59:45.923369 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:45Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.024488 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.024561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.024572 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.024593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.024609 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.127748 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.127818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.127832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.127856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.127875 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.230676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.230727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.230741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.230765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.230781 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.333629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.333675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.333686 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.333701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.333713 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.436513 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.436589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.436614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.436648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.436672 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.511740 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:51:08.565247141 +0000 UTC Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.539626 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.539722 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:46 crc kubenswrapper[4735]: E0131 14:59:46.540503 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.540588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: E0131 14:59:46.540590 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.540618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.540670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.540686 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.540699 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.541907 4735 scope.go:117] "RemoveContainer" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b" Jan 31 14:59:46 crc kubenswrapper[4735]: E0131 14:59:46.542202 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.643090 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.643165 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.643190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.643226 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.643250 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.746642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.746721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.746740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.746766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.746787 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.850506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.850548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.850558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.850579 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.850590 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.953290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.953382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.953395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.953430 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:46 crc kubenswrapper[4735]: I0131 14:59:46.953467 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:46Z","lastTransitionTime":"2026-01-31T14:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.034029 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/0.log" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.034117 4735 generic.go:334] "Generic (PLEG): container finished" podID="671e4f66-1c2f-436a-800d-fd3840e9830d" containerID="f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b" exitCode=1 Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.034170 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerDied","Data":"f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.034892 4735 scope.go:117] "RemoveContainer" containerID="f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.055252 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.056631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.056689 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.056714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.056747 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.056769 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.076203 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.098642 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.139102 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.159665 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.159722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.159736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.159762 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.159785 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.175337 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.199713 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.220106 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.234884 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.248332 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.263389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.263455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.263470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.263490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.263504 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.274943 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3af
cfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.293179 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.304804 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.325108 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.341705 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.356853 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.366061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.366101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.366111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.366125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.366138 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.370853 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.384602 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.396454 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.468792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.468848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.468864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.468881 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.468893 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.513311 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:12:51.489936352 +0000 UTC Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.539638 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.539756 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:47 crc kubenswrapper[4735]: E0131 14:59:47.539898 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:47 crc kubenswrapper[4735]: E0131 14:59:47.540188 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.571316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.571376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.571387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.571401 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.571411 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.674042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.674085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.674096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.674114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.674126 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.777819 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.777876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.777897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.777924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.777944 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.880528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.880568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.880578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.880597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.880608 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.984365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.984409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.984440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.984460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:47 crc kubenswrapper[4735]: I0131 14:59:47.984473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:47Z","lastTransitionTime":"2026-01-31T14:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.040843 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/0.log" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.040906 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerStarted","Data":"c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.060385 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.081690 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3af
cfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.086283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.086352 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.086367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.086389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.086402 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.101529 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.113965 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.135949 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.156235 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.176986 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.189718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.189777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.189791 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.189810 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.189823 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.199668 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.217761 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.233295 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.247484 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.260503 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.273848 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.289444 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.297634 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.297704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.297716 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.297741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.297754 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.303476 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.319305 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea
3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.333384 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.344639 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.400646 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.400685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.400694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.400709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.400718 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.503786 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.503853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.503868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.503894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.503910 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.514225 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:27:35.236289194 +0000 UTC Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.539632 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.539717 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:48 crc kubenswrapper[4735]: E0131 14:59:48.539831 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:48 crc kubenswrapper[4735]: E0131 14:59:48.540024 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.607611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.607675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.607684 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.607704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.607716 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.710777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.710867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.710876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.710891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.710903 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.814030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.814085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.814096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.814115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.814152 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.916927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.916973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.916983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.917002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:48 crc kubenswrapper[4735]: I0131 14:59:48.917014 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:48Z","lastTransitionTime":"2026-01-31T14:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.021945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.021990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.021999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.022020 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.022031 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.125335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.125390 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.125403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.125434 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.125448 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.229589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.229640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.229650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.229670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.229682 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.331736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.331788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.331798 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.331816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.331828 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.434797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.434841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.434851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.434872 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.434888 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.514620 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:56:59.533535876 +0000 UTC Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.538705 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.538753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.538766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.538785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.538798 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.539016 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:49 crc kubenswrapper[4735]: E0131 14:59:49.539266 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.540213 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:49 crc kubenswrapper[4735]: E0131 14:59:49.540538 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.641852 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.641903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.641914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.641931 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.641944 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.745560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.745936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.746077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.746199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.746513 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.849909 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.849977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.849994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.850019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.850037 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.953687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.953751 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.953772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.953801 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:49 crc kubenswrapper[4735]: I0131 14:59:49.953825 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:49Z","lastTransitionTime":"2026-01-31T14:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.056914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.056987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.057010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.057037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.057059 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.160698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.160753 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.160767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.160787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.160801 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.264029 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.264097 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.264115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.264139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.264161 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.367158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.367249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.367275 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.367310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.367335 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.470386 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.470542 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.470559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.470587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.470606 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.515075 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:52:07.913004733 +0000 UTC Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.539576 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.539634 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:50 crc kubenswrapper[4735]: E0131 14:59:50.539848 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:50 crc kubenswrapper[4735]: E0131 14:59:50.539998 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.574126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.574185 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.574199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.574221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.574239 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.678637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.678717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.678743 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.678782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.678807 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.782043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.782124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.782149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.782186 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.782215 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.885088 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.885130 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.885147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.885168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.885184 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.988882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.988933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.988942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.988966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:50 crc kubenswrapper[4735]: I0131 14:59:50.988979 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:50Z","lastTransitionTime":"2026-01-31T14:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.092162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.092215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.092228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.092247 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.092258 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.195202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.195267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.195290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.195324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.195343 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.298190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.298248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.298265 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.298286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.298301 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.402708 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.402761 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.402774 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.402795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.402808 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.506811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.506895 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.506916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.506942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.506961 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.515590 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:46:15.389156386 +0000 UTC Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.539368 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.539520 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:51 crc kubenswrapper[4735]: E0131 14:59:51.539638 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:51 crc kubenswrapper[4735]: E0131 14:59:51.539714 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.610973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.611036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.611081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.611109 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.611127 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.713896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.713960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.713970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.713988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.713997 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.817914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.817989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.818011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.818043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.818066 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.922197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.922288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.922316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.922353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:51 crc kubenswrapper[4735]: I0131 14:59:51.922380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:51Z","lastTransitionTime":"2026-01-31T14:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.026058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.026129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.026146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.026176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.026195 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.129833 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.129908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.129926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.129963 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.129982 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.234245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.234315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.234333 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.234361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.234380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.338338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.338415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.338473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.338511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.338537 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.442670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.442757 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.442775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.442801 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.442827 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.516663 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:16:55.405502817 +0000 UTC Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.540041 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.540145 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:52 crc kubenswrapper[4735]: E0131 14:59:52.540297 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:52 crc kubenswrapper[4735]: E0131 14:59:52.540398 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.546855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.546928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.547011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.547042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.547062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.649930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.649984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.649996 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.650019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.650031 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.754204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.754284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.754302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.754330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.754349 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.857599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.857693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.857717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.857746 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.857765 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.960917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.960963 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.960973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.960995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:52 crc kubenswrapper[4735]: I0131 14:59:52.961007 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:52Z","lastTransitionTime":"2026-01-31T14:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.063368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.063442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.063457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.063475 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.063486 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.167776 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.167832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.167846 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.167873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.167887 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.271640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.271733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.271761 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.271795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.271826 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.376297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.376363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.376379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.376400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.376415 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.479271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.479349 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.479367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.479415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.479491 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.517872 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:23:25.188590776 +0000 UTC Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.539674 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.539710 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:53 crc kubenswrapper[4735]: E0131 14:59:53.539916 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:53 crc kubenswrapper[4735]: E0131 14:59:53.540103 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.583473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.583569 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.583595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.583632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.583662 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.686984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.687037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.687050 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.687073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.687093 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.790936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.791031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.791047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.791073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.791090 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.894787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.894877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.894901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.894933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.894954 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.903053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.903152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.903167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.903192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.903207 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: E0131 14:59:53.929789 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.935727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.935785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.935798 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.935822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.935839 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: E0131 14:59:53.958802 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.965266 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.965346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.965367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.965398 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.965417 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:53 crc kubenswrapper[4735]: E0131 14:59:53.986807 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.993147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.993207 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.993222 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.993251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:53 crc kubenswrapper[4735]: I0131 14:59:53.993268 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:53Z","lastTransitionTime":"2026-01-31T14:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: E0131 14:59:54.015616 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:54Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.021951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.022030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.022051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.022079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.022097 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: E0131 14:59:54.047496 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:54Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:54 crc kubenswrapper[4735]: E0131 14:59:54.047798 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.057676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.057758 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.057782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.057813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.057837 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.160919 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.161005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.161031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.161063 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.161087 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.265328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.265391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.265415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.265477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.265501 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.369648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.369709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.369730 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.369758 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.369777 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.472914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.472989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.473009 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.473041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.473062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.518502 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:41:48.543732462 +0000 UTC Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.540114 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.540116 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:54 crc kubenswrapper[4735]: E0131 14:59:54.540377 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:54 crc kubenswrapper[4735]: E0131 14:59:54.540414 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.576771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.576837 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.576858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.576891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.576911 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.682129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.682208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.682228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.682257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.682277 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.785603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.785731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.785752 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.785780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.785799 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.888962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.889036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.889060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.889091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.889113 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.993320 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.993366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.993386 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.993416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:54 crc kubenswrapper[4735]: I0131 14:59:54.993467 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:54Z","lastTransitionTime":"2026-01-31T14:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.096951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.097370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.097406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.097466 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.097480 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.200785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.200829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.200841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.200859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.200871 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.304235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.304321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.304339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.304371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.304398 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.407976 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.408055 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.408078 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.408113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.408134 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.511830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.511905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.511930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.511959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.511979 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.519523 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:14:36.024526982 +0000 UTC Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.540040 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:55 crc kubenswrapper[4735]: E0131 14:59:55.540224 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.540287 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:55 crc kubenswrapper[4735]: E0131 14:59:55.540516 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.564787 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.587041 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.611350 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.615941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.616019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.616039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.616067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.616086 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.633230 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.652533 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.675314 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.698110 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721348 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721484 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.721614 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.740616 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc 
kubenswrapper[4735]: I0131 14:59:55.766883 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.788812 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 
14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.814617 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.826195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.826253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.826274 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.826299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.826316 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.851071 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.876166 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.899318 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.929759 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.929863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.929927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.929955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.930013 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:55Z","lastTransitionTime":"2026-01-31T14:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.936554 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:55 crc kubenswrapper[4735]: I0131 14:59:55.963405 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.015615 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:59:55Z is after 2025-08-24T17:21:41Z" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.032334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.032389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.032401 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.032439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.032455 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.135893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.135961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.135983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.136010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.136030 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.238639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.238695 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.238712 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.238734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.238752 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.341737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.341787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.341797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.341812 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.341822 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.446073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.446137 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.446153 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.446175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.446188 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.520237 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:19:20.893799067 +0000 UTC Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.539314 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.539400 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:56 crc kubenswrapper[4735]: E0131 14:59:56.539567 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:56 crc kubenswrapper[4735]: E0131 14:59:56.539837 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.548362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.548387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.548399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.548416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.548445 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.651778 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.652230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.652335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.652471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.652582 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.755767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.756196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.756391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.756589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.756727 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.861044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.861100 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.861112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.861131 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.861145 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.963741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.963792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.963805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.963824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:56 crc kubenswrapper[4735]: I0131 14:59:56.963836 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:56Z","lastTransitionTime":"2026-01-31T14:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.066623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.066675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.066685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.066739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.066758 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.169736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.169793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.169805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.169829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.169845 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.273324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.273374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.273386 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.273408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.273434 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.376080 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.376143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.376168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.376199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.376217 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.479877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.479944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.479962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.479991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.480012 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.520826 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:33:55.59100297 +0000 UTC Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.539658 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:57 crc kubenswrapper[4735]: E0131 14:59:57.539898 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.539995 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:57 crc kubenswrapper[4735]: E0131 14:59:57.540112 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.582064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.582156 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.582183 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.582219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.582243 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.685664 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.685707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.685717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.685733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.685742 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.789285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.789338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.789349 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.789367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.789380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.892291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.892333 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.892342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.892361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.892375 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.994991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.995032 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.995041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.995058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:57 crc kubenswrapper[4735]: I0131 14:59:57.995070 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:57Z","lastTransitionTime":"2026-01-31T14:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.097496 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.097636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.097664 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.097700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.097761 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.200518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.200580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.200597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.200624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.200644 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.303516 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.303596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.303614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.303643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.303662 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.407219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.407277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.407288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.407308 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.407320 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.510176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.510233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.510250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.510275 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.510294 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.521078 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:29:57.267596093 +0000 UTC Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.539624 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.539741 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:59:58 crc kubenswrapper[4735]: E0131 14:59:58.539858 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:59:58 crc kubenswrapper[4735]: E0131 14:59:58.540044 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.613592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.613666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.613689 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.613717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.613735 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.716788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.716832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.716842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.716858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.716868 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.844146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.844215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.844238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.844269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.844291 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.947605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.947671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.947722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.947746 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:58 crc kubenswrapper[4735]: I0131 14:59:58.947762 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:58Z","lastTransitionTime":"2026-01-31T14:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.050862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.050924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.050941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.050970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.050990 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.155127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.155205 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.155231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.155269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.155295 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.265250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.265327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.265350 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.265389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.265604 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.369834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.369917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.369936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.369967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.369986 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.473552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.473602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.473618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.473641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.473658 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.521933 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:17:54.14230977 +0000 UTC Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.539567 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.539560 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 14:59:59 crc kubenswrapper[4735]: E0131 14:59:59.539793 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:59:59 crc kubenswrapper[4735]: E0131 14:59:59.539954 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.577157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.577235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.577256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.577283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.577305 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.680384 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.680452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.680471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.680490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.680500 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.783592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.783663 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.783674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.783694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.783708 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.887117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.887185 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.887197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.887216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.887230 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.990844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.990923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.990949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.991064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:59:59 crc kubenswrapper[4735]: I0131 14:59:59.991163 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:59:59Z","lastTransitionTime":"2026-01-31T14:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.093790 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.093925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.093952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.093975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.093993 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.197825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.197929 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.197954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.197989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.198011 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.301706 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.301798 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.301817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.301851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.301876 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.308002 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.308312 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.308278789 +0000 UTC m=+150.081607871 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.308480 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.308615 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.308693 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.308677601 +0000 UTC m=+150.082006683 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.405616 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.405832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.405869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.405902 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.405941 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.409694 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.409786 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.409879 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.409919 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.409961 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.409985 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410030 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410067 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410089 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.410053786 +0000 UTC m=+150.183382878 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410098 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410122 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410130 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.410104377 +0000 UTC m=+150.183433459 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.410620 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.410585462 +0000 UTC m=+150.183914544 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.508954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.509025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.509047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.509078 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.509097 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.522812 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:53:43.17909422 +0000 UTC Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.539291 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.539499 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.539689 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:00 crc kubenswrapper[4735]: E0131 15:00:00.540044 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.554385 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.612141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.612217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.612240 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.612271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.612293 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.715037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.715104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.715125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.715155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.715178 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.818006 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.818078 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.818107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.818136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.818158 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.921124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.921210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.921229 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.921264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:00 crc kubenswrapper[4735]: I0131 15:00:00.921287 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:00Z","lastTransitionTime":"2026-01-31T15:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.024895 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.024956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.024981 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.025007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.025025 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.128555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.128643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.128668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.128696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.128721 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.231815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.231890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.231912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.231941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.231962 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.334327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.334377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.334390 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.334411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.334440 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.437732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.437839 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.437859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.437890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.437912 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.523482 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:42:37.174165667 +0000 UTC Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.539322 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.539776 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:01 crc kubenswrapper[4735]: E0131 15:00:01.539999 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.540044 4735 scope.go:117] "RemoveContainer" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b" Jan 31 15:00:01 crc kubenswrapper[4735]: E0131 15:00:01.540077 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.547876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.547989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.548063 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.548176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.548214 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.652694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.653462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.653498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.653859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.653918 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.759368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.759414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.759442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.759461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.759473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.862849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.862904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.862917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.862942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.862955 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.967155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.967247 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.967273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.967310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:01 crc kubenswrapper[4735]: I0131 15:00:01.967337 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:01Z","lastTransitionTime":"2026-01-31T15:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.069639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.069676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.069687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.069704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.069717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.099613 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/2.log" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.101888 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.102604 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.120295 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.134301 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.149386 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.160713 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.171907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.171965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.171983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.172043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.172059 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.178347 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.191215 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 
15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.204405 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9f2db0-c125-46b2-bf7a-589ea9a6e183\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2381a88b1e6c4cfa839c8a6dc1592af9e494f34165ec26535d3bb7e92b1d7761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.249030 4735 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd
016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.275050 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.275092 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.275105 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.275122 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.275136 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.277877 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.298995 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.323279 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a
0ba4c1e15e8823d3038112d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T15:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.343348 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.359211 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.378052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.378104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.378116 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.378138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.378155 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.379700 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.420628 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.441370 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.460921 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.475558 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.481658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.481686 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.481696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.481710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.481720 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.493949 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:02Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.524455 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:36:07.639182815 +0000 UTC Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.540012 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:02 crc kubenswrapper[4735]: E0131 15:00:02.540129 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.540319 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:02 crc kubenswrapper[4735]: E0131 15:00:02.540407 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.588883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.588968 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.588991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.589035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.589059 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.692499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.692562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.692581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.692607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.692627 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.795531 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.795596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.795607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.795633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.795647 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.899120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.899173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.899184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.899203 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:02 crc kubenswrapper[4735]: I0131 15:00:02.899216 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:02Z","lastTransitionTime":"2026-01-31T15:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.003132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.003208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.003287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.003322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.003342 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.106001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.106057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.106075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.106107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.106124 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.108840 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/3.log" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.109760 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/2.log" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.116847 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" exitCode=1 Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.116928 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.116973 4735 scope.go:117] "RemoveContainer" containerID="bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.117863 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:00:03 crc kubenswrapper[4735]: E0131 15:00:03.118074 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.145054 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 
15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.164396 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.183533 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.198458 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.209019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.209110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.209127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.209149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.209166 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.213279 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.232952 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.256990 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.271570 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.283659 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9f2db0-c125-46b2-bf7a-589ea9a6e183\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2381a88b1e6c4cfa839c8a6dc1592af9e494f34165ec26535d3bb7e92b1d7761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.309578 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.313013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.313034 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.313044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.313058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.313085 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.326747 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.342890 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.368503 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a
0ba4c1e15e8823d3038112d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc9a3c317243e8fdb111871db115a3dcd79ba3afcfc2e3a5627bf2373e92e92b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:30Z\\\",\\\"message\\\":\\\"ft-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:59:30.505599 6408 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:59:30.505612 6408 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:59:30.505336 6408 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T15:00:02Z\\\",\\\"message\\\":\\\"usterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 15:00:02.506193 6850 services_controller.go:444] Built service openshift-ingress-canary/ingress-canary LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 15:00:02.506196 6850 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.12643601 seconds. 
No OVN measurement.\\\\nI0131 15:00:02.506202 6850 services_controller.go:445] Built service openshift-ingress-canary/ingress-canary LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 15:00:02.506216 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T15:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/et
c/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.384631 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.403846 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.415455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.415508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.415520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.415537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.415549 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.418580 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.432115 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.444279 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.456814 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:03Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.519011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.519060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.519075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.519097 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.519114 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.537066 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:23:15.158974989 +0000 UTC Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.539611 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.539679 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:03 crc kubenswrapper[4735]: E0131 15:00:03.539759 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:03 crc kubenswrapper[4735]: E0131 15:00:03.539880 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.626650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.626755 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.626779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.626813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.626835 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.731104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.731175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.731192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.731219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.731240 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.833475 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.833510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.833520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.833535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.833546 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.935903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.935972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.935994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.936024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:03 crc kubenswrapper[4735]: I0131 15:00:03.936044 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:03Z","lastTransitionTime":"2026-01-31T15:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.038675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.038742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.038764 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.038795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.038815 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.073363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.073510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.073529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.073552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.073570 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.094789 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.100585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.100656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.100674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.100699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.100716 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.119890 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.123526 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/3.log" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.125721 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.125762 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.125779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.125801 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.125819 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.128982 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.129254 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.141939 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.148042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.148113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.148126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.148150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.148167 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.159064 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353a
f38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.164798 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.169453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.169550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.169578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.169617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.169643 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.180657 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.187575 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.187961 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.191032 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.191091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.191110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.191141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.191180 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.197565 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9f2db0-c125-46b2-bf7a-589ea9a6e183\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2381a88b1e6c4cfa839c8a6dc1592af9e494f34165ec26535d3bb7e92b1d7761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.232253 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382fd17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.255328 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.279260 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.294825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.294874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.294889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.294907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.294922 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.315031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T15:00:02Z\\\",\\\"message\\\":\\\"usterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 15:00:02.506193 6850 services_controller.go:444] Built service openshift-ingress-canary/ingress-canary LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 15:00:02.506196 6850 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.12643601 seconds. 
No OVN measurement.\\\\nI0131 15:00:02.506202 6850 services_controller.go:445] Built service openshift-ingress-canary/ingress-canary LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 15:00:02.506216 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T15:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.336690 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.366265 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.389391 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.397587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.397631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.397640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.397659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.397674 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.404007 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.416508 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.427711 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.442532 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 
15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.457943 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.474552 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.487196 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.501762 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.501856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.501905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.501938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.501957 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.502768 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.520657 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:04Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.537890 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:27:16.837446995 +0000 UTC Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.539108 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.539197 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.539265 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:04 crc kubenswrapper[4735]: E0131 15:00:04.539322 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.605347 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.605404 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.605415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.605455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.605471 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.708286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.708345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.708354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.708374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.708385 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.811789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.811849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.811866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.811893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.811911 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.915473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.915542 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.915559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.915583 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:04 crc kubenswrapper[4735]: I0131 15:00:04.915603 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:04Z","lastTransitionTime":"2026-01-31T15:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.018667 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.018728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.018747 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.018772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.018791 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.122173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.122243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.122263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.122290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.122308 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.225649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.225722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.225742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.225771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.225792 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.329142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.329202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.329218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.329242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.329261 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.433337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.433418 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.433500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.433537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.433560 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.537273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.537338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.537357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.537383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.537402 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.538039 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:08:22.178587386 +0000 UTC Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.539756 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.539852 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:05 crc kubenswrapper[4735]: E0131 15:00:05.540038 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:05 crc kubenswrapper[4735]: E0131 15:00:05.540298 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.557410 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea6972a97e20bcab6c58347c757aad47b5e4647365daab46ead015a230d1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4db9f430dff9b1836b0666d83978e80de666543cba2c34bb490c600c7ae2242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.570901 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.584625 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02e5e783-f52a-4e06-8f10-675af8bf59f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045c49f13117e3e8fa2df60b635767ce80a5a4ebeeed47f9eafd83a06f63d4b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ea32712f339c08096cfa6b2b3d29a322b16223aaab6c9a002197105cfbc6e50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b65a2b0cc0c46ee4071cf6346097893ce95199c2c4637b2e62dc6e427febf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.599874 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"582442e0-b079-476d-849d-a4902306aba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef69b5cc094272ee1876c5eecb5d80da322716131b1581b60d65e77a53cacd77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwqkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gq77t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.617050 4735 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q9r6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5d9418-4802-43d4-947e-6896f68d8a68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a5a622eeba306e79d6f5419a0b94ca6cec737ae7542b412cfc989da6fe90216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg47j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q9r6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.637526 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5c68aec-640e-4084-92f3-c627be884dbd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49d674b18da0cabf1e687e8989f21d045ee7288fcfbd1f12cd65a3cdaff1b947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ecce1e855f633255fd3a23fd1c64ccb21254c4fcbfd3cbdd16cc21a915c935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7834bc4851205b50aaa696294cf1d4cb1f89a8183fa43e0bc193f0960cbbc8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01c14721a17baa7642b9907ad54bdeabfba059386394e293a0f7c607a68b6218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.640107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.640165 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.640183 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.640218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.640248 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.654873 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.670913 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4bd5af2a0cac6ed6132e35f657028a10e5d001b849dd53780f3aaba204ab113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.690933 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gwdl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3527f2eb-24cf-4d43-911b-dfbd7afba999\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea9df063467e02ff6ecfbf65a16b0b2782a9fe65b0a80e5bbeceb0c8b10662b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g2h6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gwdl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.721402 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hg7gl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"671e4f66-1c2f-436a-800d-fd3840e9830d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:59:46Z\\\",\\\"message\\\":\\\"2026-01-31T14:59:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119\\\\n2026-01-31T14:59:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c959e243-b99d-494b-a687-b3b1b169e119 to /host/opt/cni/bin/\\\\n2026-01-31T14:59:01Z [verbose] multus-daemon started\\\\n2026-01-31T14:59:01Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:59:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbnwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hg7gl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.740416 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"811d304b-a115-4b54-a0dd-e6b9c33cae90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68f7c7573b1b49399bb7780d30217d558872c84f74207166945a1e2cc4b7f93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://905847089db645ced28ec05325f4ecd39d2b16c47ecd312a85e44c4af64e8df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpcrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4pv66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 
15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.744750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.744810 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.744828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.744862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.744886 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.766728 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de7d0b76-1231-4967-92b1-cea53a7a8492\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:58:52Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 14:58:52.457550 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 14:58:52.457830 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:58:52.458685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1907795107/tls.crt::/tmp/serving-cert-1907795107/tls.key\\\\\\\"\\\\nI0131 14:58:52.691274 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:58:52.697840 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:58:52.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:58:52.697910 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:58:52.697924 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:58:52.709459 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0131 14:58:52.709547 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:58:52.709631 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709640 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:58:52.709651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 
14:58:52.709664 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 14:58:52.709670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:58:52.709675 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:58:52.714510 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.801223 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c829323-818b-42de-b7f8-959e7f0f1612\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55f2fc5d41f590b499afa3b8ef3440f5e59930ac8028d21ad1ec58132f777d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e36b12e8579fcf4cadffaa90fc72ca67d4cbefce46c3796b2ff37d57e47d512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62100710423e440bd9b44b49d8ed41f85f26aa8f2f75622c53ed288e41558c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc54fe7f1dee551f7f2c954941053ea403382f
d17dbf2d21d81e77d34cbfd9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://047998cf4cfc5834400bbfe0e7b7040a6c6c2e224d84c6b61d373dc92a47d9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5377b1b831e1a5790e1715322b751b06fd11de58240e84dba0a2b71ef245d32d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b052711b01f66255d0ff30036a9669b24aa6f48f73ca2b0d8efe70c71f4299e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3acd016491e08f82578d38e55ace8add5cb4e4ec050ad1a3c341d01f104602a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.823363 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e236dfeb95137ff7cca1a468a9de7d4193311d80673f8c2f47ea6cfc2ab5054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.843082 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.848946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.849003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.849016 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.849035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.849050 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.871619 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0c86d4a-441f-4e3b-be28-632dadd81e81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T15:00:02Z\\\",\\\"message\\\":\\\"usterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 15:00:02.506193 6850 services_controller.go:444] Built service openshift-ingress-canary/ingress-canary LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 15:00:02.506196 6850 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.12643601 seconds. 
No OVN measurement.\\\\nI0131 15:00:02.506202 6850 services_controller.go:445] Built service openshift-ingress-canary/ingress-canary LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 15:00:02.506216 6850 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T15:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6td7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2c6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.898116 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28589772-80ef-4a88-9b68-eb15b241ef7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e46de8dec47bd71102d2365d24f26687295d93e081dde2e9c637f962825fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:59:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://913a0cd3e60157a1284313fbb3314d9c2fdaa3e19bc17ddfd420f79a39e774a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba6c851d86fc834b023406718a58e4a947ad4f2c24e8d385e406c8883990dff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d414d29c145684c7b756a8db6a60f2348ff88fed0c5410c64d609ac29bc57c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a4837e710d31c0cb55ee018ccbf7fd353af38d51da6c1882f52d326074ac3b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0632f013be88687c216f180d364a578c83a161603a2bc7f63d2197728efcb3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a8e5954b398b60b2ecf17e5dac391994155f66db9e084efe23c38eabf7ec9cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:59:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:59:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ck7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.914867 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:59:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5g47k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:59:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqxxz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.931391 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9f2db0-c125-46b2-bf7a-589ea9a6e183\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:58:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2381a88b1e6c4cfa839c8a6dc1592af9e494f34165ec26535d3bb7e92b1d7761\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0513a6b8123f85802861a881f783b224364b0cad817325edf5d20a6814e40a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:58:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:58:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:58:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:05Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.952521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.952557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.952570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.952589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:05 crc kubenswrapper[4735]: I0131 15:00:05.952603 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:05Z","lastTransitionTime":"2026-01-31T15:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.055255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.055316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.055336 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.055364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.055384 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.159544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.159630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.159650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.159683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.159710 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.264724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.264805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.264826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.264857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.264881 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.368333 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.368468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.368497 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.368532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.368565 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.471690 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.471763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.471782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.471806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.471826 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.539120 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:58:51.177461049 +0000 UTC Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.539295 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.539295 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:06 crc kubenswrapper[4735]: E0131 15:00:06.539518 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:06 crc kubenswrapper[4735]: E0131 15:00:06.539645 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.574724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.574780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.574797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.574818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.574839 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.678643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.678715 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.678736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.678764 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.678786 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.782310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.782387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.782407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.782465 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.782486 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.886305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.886372 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.886389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.886419 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.886505 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.989203 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.989270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.989289 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.989314 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:06 crc kubenswrapper[4735]: I0131 15:00:06.989332 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:06Z","lastTransitionTime":"2026-01-31T15:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.092250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.092324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.092345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.092368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.092386 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.195267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.195356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.195413 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.195474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.195492 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.299152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.299212 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.299225 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.299243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.299257 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.402568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.402634 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.402651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.402674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.402691 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.505476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.505550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.505568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.505595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.505614 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.539554 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:23:26.164413072 +0000 UTC Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.539784 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.539832 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:07 crc kubenswrapper[4735]: E0131 15:00:07.540046 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:07 crc kubenswrapper[4735]: E0131 15:00:07.540202 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.608675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.608747 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.608768 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.608800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.608821 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.712308 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.712364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.712380 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.712404 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.712445 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.815945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.816029 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.816057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.816089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.816113 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.919368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.919468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.919491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.919521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:07 crc kubenswrapper[4735]: I0131 15:00:07.919542 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:07Z","lastTransitionTime":"2026-01-31T15:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.022274 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.022342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.022366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.022397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.022418 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.125756 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.125840 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.125859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.125888 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.125940 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.228642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.228726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.228751 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.228785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.228808 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.331909 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.332005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.332018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.332047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.332062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.435245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.435285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.435293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.435309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.435320 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.538626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.538690 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.538701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.538725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.538738 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.539063 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.539178 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:08 crc kubenswrapper[4735]: E0131 15:00:08.539239 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:08 crc kubenswrapper[4735]: E0131 15:00:08.539465 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.540747 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:12:04.345620312 +0000 UTC Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.642118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.642193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.642218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.642251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.642282 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.745104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.745138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.745149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.745163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.745172 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.848609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.848671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.848683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.848712 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.848726 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.951369 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.951442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.951453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.951471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:08 crc kubenswrapper[4735]: I0131 15:00:08.951481 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:08Z","lastTransitionTime":"2026-01-31T15:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.055312 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.055401 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.055463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.055507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.055534 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.158491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.158597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.158615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.158639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.158656 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.262035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.262125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.262143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.262164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.262175 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.365537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.365606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.365623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.365647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.365665 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.468072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.468116 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.468125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.468142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.468152 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.539737 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:09 crc kubenswrapper[4735]: E0131 15:00:09.539931 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.540024 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:09 crc kubenswrapper[4735]: E0131 15:00:09.540276 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.540958 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:30:12.729914909 +0000 UTC Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.571583 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.571639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.571654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.571674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.571691 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.674141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.674187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.674196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.674213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.674223 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.778158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.778232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.778250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.778281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.778298 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.882190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.882241 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.882253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.882276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.882292 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.985987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.986053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.986072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.986098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:09 crc kubenswrapper[4735]: I0131 15:00:09.986119 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:09Z","lastTransitionTime":"2026-01-31T15:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.091868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.091930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.091947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.091974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.091993 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.195817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.195880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.195894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.195924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.195939 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.300414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.300505 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.300522 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.300552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.300571 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.403688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.403760 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.403779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.403811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.403831 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.508341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.508409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.508458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.508487 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.508507 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.539988 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.540098 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:10 crc kubenswrapper[4735]: E0131 15:00:10.540646 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.541257 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:19:57.040582514 +0000 UTC Jan 31 15:00:10 crc kubenswrapper[4735]: E0131 15:00:10.541827 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.612775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.612854 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.612880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.612910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.612932 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.716811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.716901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.716918 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.716941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.716958 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.820627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.820702 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.820721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.820750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.820771 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.924474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.924552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.924574 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.924605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:10 crc kubenswrapper[4735]: I0131 15:00:10.924627 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:10Z","lastTransitionTime":"2026-01-31T15:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.029238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.029313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.029334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.029364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.029383 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.133350 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.133474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.133501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.133531 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.133558 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.237107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.237183 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.237204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.237232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.237251 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.340550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.340636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.340661 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.340698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.340723 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.445174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.445243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.445257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.445281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.445296 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.539853 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.539922 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:11 crc kubenswrapper[4735]: E0131 15:00:11.540050 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:11 crc kubenswrapper[4735]: E0131 15:00:11.540169 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.542054 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:42:18.618347512 +0000 UTC Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.547503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.547539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.547551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.547568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.547580 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.650825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.650890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.650904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.650927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.650942 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.754238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.754304 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.754316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.754339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.754353 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.857464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.857536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.857556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.857586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.857605 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.960917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.961009 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.961040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.961070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:11 crc kubenswrapper[4735]: I0131 15:00:11.961092 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:11Z","lastTransitionTime":"2026-01-31T15:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.065045 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.065098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.065111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.065133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.065147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.168236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.168305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.168324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.168349 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.168367 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.272073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.272144 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.272163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.272192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.272212 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.374795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.374890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.374906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.374925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.374937 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.478090 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.478156 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.478174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.478201 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.478225 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.539360 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.539514 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:12 crc kubenswrapper[4735]: E0131 15:00:12.539671 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:12 crc kubenswrapper[4735]: E0131 15:00:12.539808 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.542747 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:47:06.261647746 +0000 UTC Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.581806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.581866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.581879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.581904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.581925 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.685281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.685346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.685366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.685395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.685453 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.789482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.789543 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.789561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.789587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.789605 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.892782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.892853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.892873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.892897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.892913 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.996311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.996478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.996502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.996536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:12 crc kubenswrapper[4735]: I0131 15:00:12.996558 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:12Z","lastTransitionTime":"2026-01-31T15:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.100479 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.100540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.100559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.100585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.100603 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.203947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.204024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.204040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.204064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.204080 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.308321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.308387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.308412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.308480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.308505 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.411868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.411932 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.411956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.411988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.412010 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.514551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.514618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.514636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.514662 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.514682 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.539618 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.539698 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:13 crc kubenswrapper[4735]: E0131 15:00:13.539795 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:13 crc kubenswrapper[4735]: E0131 15:00:13.540009 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.543597 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:14:54.486637259 +0000 UTC Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.618019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.618085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.618109 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.618139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.618163 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.720710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.720777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.720793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.720819 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.720837 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.824564 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.824656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.824682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.824718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.824747 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.928452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.928512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.928529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.928557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:13 crc kubenswrapper[4735]: I0131 15:00:13.928575 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:13Z","lastTransitionTime":"2026-01-31T15:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.031475 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.031532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.031550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.031572 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.031588 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.135157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.135525 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.135644 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.135825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.135911 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.239512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.239570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.239589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.239614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.239631 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.343107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.343540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.343743 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.343928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.344086 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.380056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.380120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.380138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.380164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.380184 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.401947 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:14Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.408717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.408777 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.408795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.408820 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.408839 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.434561 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:14Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.443626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.443673 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.443691 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.443714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.443734 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.466598 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:14Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.472255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.472459 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.472565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.472668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.472773 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.494922 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:14Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.500079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.500162 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.500181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.500210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.500232 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.521313 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T15:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2870dd97-bffb-460f-a7a6-a5d63988938d\\\",\\\"systemUUID\\\":\\\"3e901e4e-05cc-4e58-82f7-0308e6f65229\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T15:00:14Z is after 2025-08-24T17:21:41Z" Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.521569 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.523774 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.523835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.523856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.523884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.523905 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.539224 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.539235 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.539636 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:14 crc kubenswrapper[4735]: E0131 15:00:14.539758 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.544227 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:21:10.876849698 +0000 UTC Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.627237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.627284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.627301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.627327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.627346 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.731299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.731356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.731374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.731399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.731417 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.835267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.835305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.835325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.835344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.835357 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.938612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.938660 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.938672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.938690 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:14 crc kubenswrapper[4735]: I0131 15:00:14.938702 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:14Z","lastTransitionTime":"2026-01-31T15:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.041693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.041771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.041788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.041818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.041856 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.145781 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.145856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.145876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.145903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.145926 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.249548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.249612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.249630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.249657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.249680 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.353838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.353910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.353928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.353957 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.353980 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.458376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.458480 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.458501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.458529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.458549 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.539607 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.539761 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:15 crc kubenswrapper[4735]: E0131 15:00:15.539896 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:15 crc kubenswrapper[4735]: E0131 15:00:15.540181 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.544823 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:49:17.775218321 +0000 UTC Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.561795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.561846 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.561864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.561887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.561904 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.602306 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.602283249 podStartE2EDuration="1m19.602283249s" podCreationTimestamp="2026-01-31 14:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.580300096 +0000 UTC m=+101.353629168" watchObservedRunningTime="2026-01-31 15:00:15.602283249 +0000 UTC m=+101.375612331" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.646571 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gwdl8" podStartSLOduration=78.643299215 podStartE2EDuration="1m18.643299215s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.641088778 +0000 UTC m=+101.414417890" watchObservedRunningTime="2026-01-31 15:00:15.643299215 +0000 UTC m=+101.416628357" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.663481 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hg7gl" podStartSLOduration=78.663408511 podStartE2EDuration="1m18.663408511s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.662981678 +0000 UTC m=+101.436310810" watchObservedRunningTime="2026-01-31 15:00:15.663408511 +0000 UTC m=+101.436737593" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.665031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.665102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.665170 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.665204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.665227 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.709906 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4pv66" podStartSLOduration=77.709882051 podStartE2EDuration="1m17.709882051s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.6919082 +0000 UTC m=+101.465237302" watchObservedRunningTime="2026-01-31 15:00:15.709882051 +0000 UTC m=+101.483211103" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.710328 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.710323664 podStartE2EDuration="15.710323664s" podCreationTimestamp="2026-01-31 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.709990484 +0000 UTC m=+101.483319556" watchObservedRunningTime="2026-01-31 15:00:15.710323664 +0000 UTC m=+101.483652706" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.753073 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.753049182 podStartE2EDuration="1m19.753049182s" podCreationTimestamp="2026-01-31 14:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.750814375 +0000 UTC m=+101.524143427" watchObservedRunningTime="2026-01-31 15:00:15.753049182 +0000 UTC m=+101.526378234" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.768309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.768401 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.768458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.768494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.768515 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.871010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.871084 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.871101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.871129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.871149 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.880686 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ck7n9" podStartSLOduration=78.880658197 podStartE2EDuration="1m18.880658197s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.878404379 +0000 UTC m=+101.651733491" watchObservedRunningTime="2026-01-31 15:00:15.880658197 +0000 UTC m=+101.653987279" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.942701 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.942658505 podStartE2EDuration="1m15.942658505s" podCreationTimestamp="2026-01-31 14:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.942463169 +0000 UTC m=+101.715792251" watchObservedRunningTime="2026-01-31 15:00:15.942658505 +0000 UTC m=+101.715987557" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.973278 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.973345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.973357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.973374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.973387 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:15Z","lastTransitionTime":"2026-01-31T15:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:15 crc kubenswrapper[4735]: I0131 15:00:15.988018 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.987991951 podStartE2EDuration="51.987991951s" podCreationTimestamp="2026-01-31 14:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:15.987231349 +0000 UTC m=+101.760560401" watchObservedRunningTime="2026-01-31 15:00:15.987991951 +0000 UTC m=+101.761320993" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.003107 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podStartSLOduration=79.003079006 podStartE2EDuration="1m19.003079006s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:16.002944102 +0000 UTC m=+101.776273144" watchObservedRunningTime="2026-01-31 15:00:16.003079006 +0000 UTC m=+101.776408058" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.007312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:16 crc kubenswrapper[4735]: E0131 15:00:16.007499 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 15:00:16 crc kubenswrapper[4735]: E0131 15:00:16.007591 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs podName:ea89cfa6-d46d-4cda-a91e-a1d06a743204 nodeName:}" failed. No retries permitted until 2026-01-31 15:01:20.007570561 +0000 UTC m=+165.780899623 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs") pod "network-metrics-daemon-rqxxz" (UID: "ea89cfa6-d46d-4cda-a91e-a1d06a743204") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.013820 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q9r6v" podStartSLOduration=79.013800159 podStartE2EDuration="1m19.013800159s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:16.012780158 +0000 UTC m=+101.786109220" watchObservedRunningTime="2026-01-31 15:00:16.013800159 +0000 UTC m=+101.787129201" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.075710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.075769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.075785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.075808 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.075826 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.178048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.178129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.178155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.178189 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.178217 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.282000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.282062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.282082 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.282110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.282131 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.385714 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.385775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.385787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.385810 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.385827 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.488592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.488640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.488673 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.488693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.488707 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.539533 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.539649 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:16 crc kubenswrapper[4735]: E0131 15:00:16.539714 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:16 crc kubenswrapper[4735]: E0131 15:00:16.539856 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.541695 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:00:16 crc kubenswrapper[4735]: E0131 15:00:16.542198 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.545168 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:09:07.23963606 +0000 UTC Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.592979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.593095 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.593115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.593182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.593203 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.698025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.698110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.698128 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.698169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.698197 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.802079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.802150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.802168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.802195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.802214 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.905599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.905676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.905701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.905731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:16 crc kubenswrapper[4735]: I0131 15:00:16.905753 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:16Z","lastTransitionTime":"2026-01-31T15:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.009742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.009821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.009845 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.009876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.009894 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.113514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.113578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.113595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.113620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.113638 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.216838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.216893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.216915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.216942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.216963 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.320300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.320369 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.320385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.320409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.320476 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.423364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.423454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.423479 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.423511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.423532 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.527057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.527116 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.527133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.527157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.527176 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.539749 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.539800 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:17 crc kubenswrapper[4735]: E0131 15:00:17.539934 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:17 crc kubenswrapper[4735]: E0131 15:00:17.540125 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.545769 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:42:09.266564238 +0000 UTC Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.629961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.630022 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.630047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.630077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.630100 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.733606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.733692 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.733713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.733740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.733761 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.837404 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.837524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.837546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.837612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.837642 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.940578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.940657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.940683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.940710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:17 crc kubenswrapper[4735]: I0131 15:00:17.940730 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:17Z","lastTransitionTime":"2026-01-31T15:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.044455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.044535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.044552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.044580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.044604 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.147993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.148043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.148064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.148093 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.148116 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.250540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.250614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.250633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.250657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.250677 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.353863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.353926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.353943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.353967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.353985 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.457195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.457254 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.457270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.457293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.457309 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.539233 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.539239 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:18 crc kubenswrapper[4735]: E0131 15:00:18.539508 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:18 crc kubenswrapper[4735]: E0131 15:00:18.539623 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.546610 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:40:40.762218832 +0000 UTC Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.560185 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.560236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.560255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.560285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.560302 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.663234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.663302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.663320 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.663345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.663366 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.767613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.767697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.767720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.767750 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.767772 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.871844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.871920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.871954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.871985 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.872009 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.974804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.974869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.974887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.974910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:18 crc kubenswrapper[4735]: I0131 15:00:18.974929 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:18Z","lastTransitionTime":"2026-01-31T15:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.078332 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.078397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.078414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.078495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.078536 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.181858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.181923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.181941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.181970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.181988 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.284190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.284218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.284226 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.284238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.284247 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.387535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.387600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.387617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.387640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.387657 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.490810 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.490954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.490974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.491003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.491020 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.539740 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.539740 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:19 crc kubenswrapper[4735]: E0131 15:00:19.540065 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:19 crc kubenswrapper[4735]: E0131 15:00:19.540178 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.546741 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:42:55.172299046 +0000 UTC Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.594368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.594436 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.594445 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.594461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.594471 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.698122 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.698196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.698214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.698239 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.698257 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.801858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.801916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.801936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.801960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.801981 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.905728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.905785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.905805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.905830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:19 crc kubenswrapper[4735]: I0131 15:00:19.905849 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:19Z","lastTransitionTime":"2026-01-31T15:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.009233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.009277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.009288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.009306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.009317 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.112829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.112903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.112922 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.112951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.112969 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.215606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.215672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.215698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.215732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.215756 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.318619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.318700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.318726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.318761 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.318783 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.422589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.422661 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.422680 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.422705 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.422747 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.526647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.526794 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.526814 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.526843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.526864 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.539143 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.539153 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:20 crc kubenswrapper[4735]: E0131 15:00:20.539329 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:20 crc kubenswrapper[4735]: E0131 15:00:20.539469 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.547209 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:22:09.766217659 +0000 UTC Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.629864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.629936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.629954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.629980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.630001 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.733783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.733849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.733866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.733891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.733909 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.837581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.837663 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.837695 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.837728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.837751 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.941234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.941301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.941327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.941359 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:20 crc kubenswrapper[4735]: I0131 15:00:20.941389 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:20Z","lastTransitionTime":"2026-01-31T15:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.044456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.044529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.044554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.044584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.044609 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.147904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.147999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.148021 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.148049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.148073 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.251497 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.251563 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.251581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.251606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.251628 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.355841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.355903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.355920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.355954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.355972 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.459493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.459582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.459601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.459626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.459646 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.539294 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.539392 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:21 crc kubenswrapper[4735]: E0131 15:00:21.539579 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:21 crc kubenswrapper[4735]: E0131 15:00:21.539729 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.547862 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:51:16.979847674 +0000 UTC Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.562869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.562925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.562944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.562973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.562992 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.666139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.666207 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.666230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.666275 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.666299 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.770031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.770095 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.770109 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.770131 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.770143 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.873875 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.873938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.873957 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.873983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.873999 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.978287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.978343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.978360 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.978383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:21 crc kubenswrapper[4735]: I0131 15:00:21.978400 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:21Z","lastTransitionTime":"2026-01-31T15:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.082338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.082403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.082454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.082481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.082502 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.185510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.185584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.185616 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.185636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.185647 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.288647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.288715 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.288734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.288756 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.288772 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.391855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.391925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.391936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.391954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.391965 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.494791 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.494866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.494883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.494908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.494925 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.539504 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.539562 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:22 crc kubenswrapper[4735]: E0131 15:00:22.539717 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:22 crc kubenswrapper[4735]: E0131 15:00:22.539869 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.548523 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:56:27.440180268 +0000 UTC Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.597544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.597603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.597621 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.597643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.597660 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.700166 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.700207 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.700219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.700237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.700249 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.803306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.803385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.803403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.803454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.803473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.906477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.906539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.906555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.906582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:22 crc kubenswrapper[4735]: I0131 15:00:22.906599 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:22Z","lastTransitionTime":"2026-01-31T15:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.008897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.008950 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.008965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.008988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.009003 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.112232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.112272 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.112283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.112300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.112314 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.215981 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.216038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.216057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.216081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.216098 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.319562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.319636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.319653 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.320114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.320169 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.424654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.424689 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.424698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.424713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.424723 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.527860 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.527916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.527928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.527945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.528260 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.539764 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.539919 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:23 crc kubenswrapper[4735]: E0131 15:00:23.539998 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:23 crc kubenswrapper[4735]: E0131 15:00:23.540183 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.549358 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:35:59.278097335 +0000 UTC Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.631570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.631627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.631641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.631664 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.631679 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.735025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.735099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.735119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.735148 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.735165 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.837882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.837959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.837977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.838005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.838077 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.941490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.941559 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.941576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.941601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:23 crc kubenswrapper[4735]: I0131 15:00:23.941621 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:23Z","lastTransitionTime":"2026-01-31T15:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.044548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.044623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.044642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.044668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.044688 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.148413 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.148549 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.148580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.148616 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.148642 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.252677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.252767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.252789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.252828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.252856 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.357289 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.357379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.357397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.357463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.357496 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.460189 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.460244 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.460260 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.460285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.460302 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.539326 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.539384 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:24 crc kubenswrapper[4735]: E0131 15:00:24.539831 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:24 crc kubenswrapper[4735]: E0131 15:00:24.539982 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.550351 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:03:50.43776392 +0000 UTC Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.562643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.562717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.562736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.562764 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.562783 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.666284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.666378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.666396 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.666460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.666517 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.770037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.770133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.770167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.770206 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.770229 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.874258 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.874319 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.874332 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.874352 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.874368 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.914360 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.914452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.914463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.914483 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.914496 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T15:00:24Z","lastTransitionTime":"2026-01-31T15:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.982299 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb"] Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.982854 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.986589 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.986919 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.987030 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 15:00:24 crc kubenswrapper[4735]: I0131 15:00:24.989004 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.118313 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42b78a33-856a-4265-ba66-2732dc8165c2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.118417 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42b78a33-856a-4265-ba66-2732dc8165c2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.118500 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b78a33-856a-4265-ba66-2732dc8165c2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.118555 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.118594 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.219871 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42b78a33-856a-4265-ba66-2732dc8165c2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.219962 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42b78a33-856a-4265-ba66-2732dc8165c2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.220019 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b78a33-856a-4265-ba66-2732dc8165c2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.220056 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.220089 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.220196 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.220319 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/42b78a33-856a-4265-ba66-2732dc8165c2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.221318 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42b78a33-856a-4265-ba66-2732dc8165c2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.229927 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/42b78a33-856a-4265-ba66-2732dc8165c2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.251250 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b78a33-856a-4265-ba66-2732dc8165c2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xgvjb\" (UID: \"42b78a33-856a-4265-ba66-2732dc8165c2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.314094 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.541641 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.541686 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:25 crc kubenswrapper[4735]: E0131 15:00:25.544288 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:25 crc kubenswrapper[4735]: E0131 15:00:25.544439 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.553491 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:23:19.173338299 +0000 UTC Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.553597 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 15:00:25 crc kubenswrapper[4735]: I0131 15:00:25.562797 4735 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 15:00:26 crc kubenswrapper[4735]: I0131 15:00:26.216519 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" event={"ID":"42b78a33-856a-4265-ba66-2732dc8165c2","Type":"ContainerStarted","Data":"e409331d6e37daaec4c128e59d292c2e27744fa7a4b02ba95847cb02c9b439c3"} Jan 31 15:00:26 crc kubenswrapper[4735]: I0131 15:00:26.216719 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" event={"ID":"42b78a33-856a-4265-ba66-2732dc8165c2","Type":"ContainerStarted","Data":"759fd9a2b625db23ac09ef3742387fbca9ba9444249d610a63c340142169e602"} Jan 31 15:00:26 crc kubenswrapper[4735]: I0131 15:00:26.242835 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xgvjb" podStartSLOduration=89.242797867 podStartE2EDuration="1m29.242797867s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:26.241372594 +0000 UTC m=+112.014701696" watchObservedRunningTime="2026-01-31 15:00:26.242797867 +0000 UTC m=+112.016126949" Jan 31 15:00:26 crc kubenswrapper[4735]: I0131 15:00:26.539095 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:26 crc kubenswrapper[4735]: E0131 15:00:26.539318 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:26 crc kubenswrapper[4735]: I0131 15:00:26.539750 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:26 crc kubenswrapper[4735]: E0131 15:00:26.539912 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:27 crc kubenswrapper[4735]: I0131 15:00:27.539484 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:27 crc kubenswrapper[4735]: I0131 15:00:27.539529 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:27 crc kubenswrapper[4735]: E0131 15:00:27.539684 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:27 crc kubenswrapper[4735]: E0131 15:00:27.539851 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:28 crc kubenswrapper[4735]: I0131 15:00:28.539253 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:28 crc kubenswrapper[4735]: I0131 15:00:28.539315 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:28 crc kubenswrapper[4735]: E0131 15:00:28.539388 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:28 crc kubenswrapper[4735]: E0131 15:00:28.539555 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:29 crc kubenswrapper[4735]: I0131 15:00:29.539529 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:29 crc kubenswrapper[4735]: E0131 15:00:29.539924 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:29 crc kubenswrapper[4735]: I0131 15:00:29.540015 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:29 crc kubenswrapper[4735]: E0131 15:00:29.540663 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:29 crc kubenswrapper[4735]: I0131 15:00:29.541561 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:00:29 crc kubenswrapper[4735]: E0131 15:00:29.541876 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2c6zv_openshift-ovn-kubernetes(b0c86d4a-441f-4e3b-be28-632dadd81e81)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" Jan 31 15:00:30 crc kubenswrapper[4735]: I0131 15:00:30.539924 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:30 crc kubenswrapper[4735]: I0131 15:00:30.540052 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:30 crc kubenswrapper[4735]: E0131 15:00:30.540131 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:30 crc kubenswrapper[4735]: E0131 15:00:30.540365 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:31 crc kubenswrapper[4735]: I0131 15:00:31.539204 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:31 crc kubenswrapper[4735]: E0131 15:00:31.539404 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:31 crc kubenswrapper[4735]: I0131 15:00:31.539856 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:31 crc kubenswrapper[4735]: E0131 15:00:31.539965 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:32 crc kubenswrapper[4735]: I0131 15:00:32.539544 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:32 crc kubenswrapper[4735]: I0131 15:00:32.539644 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:32 crc kubenswrapper[4735]: E0131 15:00:32.539763 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:32 crc kubenswrapper[4735]: E0131 15:00:32.539888 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.246343 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/1.log" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.247686 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/0.log" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.247783 4735 generic.go:334] "Generic (PLEG): container finished" podID="671e4f66-1c2f-436a-800d-fd3840e9830d" containerID="c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5" exitCode=1 Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.247963 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerDied","Data":"c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5"} Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.248276 4735 scope.go:117] "RemoveContainer" containerID="f29972aa6ec3b215a7ab4bfaefcac507bf423a7af15a2849fda09f2e64bed69b" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.249195 4735 scope.go:117] "RemoveContainer" containerID="c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5" Jan 31 15:00:33 crc kubenswrapper[4735]: E0131 15:00:33.249588 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hg7gl_openshift-multus(671e4f66-1c2f-436a-800d-fd3840e9830d)\"" pod="openshift-multus/multus-hg7gl" podUID="671e4f66-1c2f-436a-800d-fd3840e9830d" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.540091 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:33 crc kubenswrapper[4735]: I0131 15:00:33.540102 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:33 crc kubenswrapper[4735]: E0131 15:00:33.540824 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:33 crc kubenswrapper[4735]: E0131 15:00:33.540971 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:34 crc kubenswrapper[4735]: I0131 15:00:34.254349 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/1.log" Jan 31 15:00:34 crc kubenswrapper[4735]: I0131 15:00:34.540006 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:34 crc kubenswrapper[4735]: E0131 15:00:34.540238 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:34 crc kubenswrapper[4735]: I0131 15:00:34.540622 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:34 crc kubenswrapper[4735]: E0131 15:00:34.540895 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:35 crc kubenswrapper[4735]: E0131 15:00:35.436247 4735 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 15:00:35 crc kubenswrapper[4735]: I0131 15:00:35.539150 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:35 crc kubenswrapper[4735]: I0131 15:00:35.539236 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:35 crc kubenswrapper[4735]: E0131 15:00:35.540816 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:35 crc kubenswrapper[4735]: E0131 15:00:35.540937 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:35 crc kubenswrapper[4735]: E0131 15:00:35.916289 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:00:36 crc kubenswrapper[4735]: I0131 15:00:36.539375 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:36 crc kubenswrapper[4735]: I0131 15:00:36.539417 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:36 crc kubenswrapper[4735]: E0131 15:00:36.539627 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:36 crc kubenswrapper[4735]: E0131 15:00:36.539764 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:37 crc kubenswrapper[4735]: I0131 15:00:37.540046 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:37 crc kubenswrapper[4735]: I0131 15:00:37.540084 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:37 crc kubenswrapper[4735]: E0131 15:00:37.540255 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:37 crc kubenswrapper[4735]: E0131 15:00:37.540359 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:38 crc kubenswrapper[4735]: I0131 15:00:38.539309 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:38 crc kubenswrapper[4735]: I0131 15:00:38.539414 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:38 crc kubenswrapper[4735]: E0131 15:00:38.539531 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:38 crc kubenswrapper[4735]: E0131 15:00:38.539708 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:39 crc kubenswrapper[4735]: I0131 15:00:39.539600 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:39 crc kubenswrapper[4735]: I0131 15:00:39.539890 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:39 crc kubenswrapper[4735]: E0131 15:00:39.540142 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:39 crc kubenswrapper[4735]: E0131 15:00:39.540733 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:40 crc kubenswrapper[4735]: I0131 15:00:40.539287 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:40 crc kubenswrapper[4735]: I0131 15:00:40.539651 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:40 crc kubenswrapper[4735]: E0131 15:00:40.539876 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:40 crc kubenswrapper[4735]: E0131 15:00:40.540096 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:40 crc kubenswrapper[4735]: E0131 15:00:40.917505 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:00:41 crc kubenswrapper[4735]: I0131 15:00:41.539250 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:41 crc kubenswrapper[4735]: I0131 15:00:41.539372 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:41 crc kubenswrapper[4735]: E0131 15:00:41.539385 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:41 crc kubenswrapper[4735]: E0131 15:00:41.539666 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:42 crc kubenswrapper[4735]: I0131 15:00:42.539240 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:42 crc kubenswrapper[4735]: I0131 15:00:42.539310 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:42 crc kubenswrapper[4735]: E0131 15:00:42.539398 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:42 crc kubenswrapper[4735]: E0131 15:00:42.539619 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:43 crc kubenswrapper[4735]: I0131 15:00:43.539296 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:43 crc kubenswrapper[4735]: E0131 15:00:43.540716 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:43 crc kubenswrapper[4735]: I0131 15:00:43.539399 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:43 crc kubenswrapper[4735]: E0131 15:00:43.541164 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:44 crc kubenswrapper[4735]: I0131 15:00:44.540026 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:44 crc kubenswrapper[4735]: I0131 15:00:44.540146 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:44 crc kubenswrapper[4735]: I0131 15:00:44.540729 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:00:44 crc kubenswrapper[4735]: E0131 15:00:44.540791 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:44 crc kubenswrapper[4735]: E0131 15:00:44.541105 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.301989 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/3.log" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.307347 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerStarted","Data":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.307994 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.539561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.542256 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.542340 4735 scope.go:117] "RemoveContainer" containerID="c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5" Jan 31 15:00:45 crc kubenswrapper[4735]: E0131 15:00:45.542476 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:45 crc kubenswrapper[4735]: E0131 15:00:45.542587 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.574949 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podStartSLOduration=107.574911797 podStartE2EDuration="1m47.574911797s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:45.343183037 +0000 UTC m=+131.116512149" watchObservedRunningTime="2026-01-31 15:00:45.574911797 +0000 UTC m=+131.348240889" Jan 31 15:00:45 crc kubenswrapper[4735]: I0131 15:00:45.690317 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqxxz"] Jan 31 15:00:45 crc kubenswrapper[4735]: E0131 15:00:45.918364 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 31 15:00:46 crc kubenswrapper[4735]: I0131 15:00:46.314290 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/1.log" Jan 31 15:00:46 crc kubenswrapper[4735]: I0131 15:00:46.314394 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:46 crc kubenswrapper[4735]: E0131 15:00:46.314571 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:46 crc kubenswrapper[4735]: I0131 15:00:46.314807 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerStarted","Data":"c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7"} Jan 31 15:00:46 crc kubenswrapper[4735]: I0131 15:00:46.539178 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:46 crc kubenswrapper[4735]: E0131 15:00:46.539307 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:46 crc kubenswrapper[4735]: I0131 15:00:46.539399 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:46 crc kubenswrapper[4735]: E0131 15:00:46.539721 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:47 crc kubenswrapper[4735]: I0131 15:00:47.539044 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:47 crc kubenswrapper[4735]: E0131 15:00:47.539669 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:48 crc kubenswrapper[4735]: I0131 15:00:48.539125 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:48 crc kubenswrapper[4735]: I0131 15:00:48.539222 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:48 crc kubenswrapper[4735]: E0131 15:00:48.539324 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:48 crc kubenswrapper[4735]: E0131 15:00:48.539496 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:48 crc kubenswrapper[4735]: I0131 15:00:48.539242 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:48 crc kubenswrapper[4735]: E0131 15:00:48.539747 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:49 crc kubenswrapper[4735]: I0131 15:00:49.196529 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 15:00:49 crc kubenswrapper[4735]: I0131 15:00:49.539127 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:49 crc kubenswrapper[4735]: E0131 15:00:49.539329 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 15:00:50 crc kubenswrapper[4735]: I0131 15:00:50.539750 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:50 crc kubenswrapper[4735]: I0131 15:00:50.539801 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:50 crc kubenswrapper[4735]: I0131 15:00:50.539823 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:50 crc kubenswrapper[4735]: E0131 15:00:50.539940 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqxxz" podUID="ea89cfa6-d46d-4cda-a91e-a1d06a743204" Jan 31 15:00:50 crc kubenswrapper[4735]: E0131 15:00:50.540119 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 15:00:50 crc kubenswrapper[4735]: E0131 15:00:50.540182 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 15:00:51 crc kubenswrapper[4735]: I0131 15:00:51.540032 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:00:51 crc kubenswrapper[4735]: I0131 15:00:51.543964 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 15:00:51 crc kubenswrapper[4735]: I0131 15:00:51.545064 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.539479 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.539541 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.539607 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.544096 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.544135 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.544236 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 15:00:52 crc kubenswrapper[4735]: I0131 15:00:52.544355 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.537209 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.590067 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.591005 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.598534 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.599376 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.599606 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.599868 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.600906 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.600947 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.601695 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.602111 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.602218 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.602360 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.603159 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.603185 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.603231 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9q4gk"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.603174 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.609671 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.611078 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.614554 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-875xp"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.618467 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.618753 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.619010 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.633601 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.635818 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.636522 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.636882 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.637254 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.638197 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.638547 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.638730 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.639209 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.639390 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.639605 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.639677 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxhdb"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.640183 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.640693 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.641350 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.641606 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.641876 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.642023 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.642174 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.642306 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.644651 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.644977 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.645504 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.645846 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.646212 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.646247 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.646524 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.646800 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.649948 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.650498 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.657569 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.657608 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.657646 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.657804 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.657920 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.658024 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.664460 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.664798 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.683646 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.685486 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.707629 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.707869 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.713706 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738190 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738538 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738584 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8jh\" (UniqueName: \"kubernetes.io/projected/cc0df5cf-ade5-423e-9c9e-d389228c7246-kube-api-access-gt8jh\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738610 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738629 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738647 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-etcd-serving-ca\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738670 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738689 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c7479e59-0549-41c5-99f5-e56b81a9f9a5-machine-approver-tls\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" 
Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738757 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knjb\" (UniqueName: \"kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738781 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6hb\" (UniqueName: \"kubernetes.io/projected/f4f37354-52b0-4164-a33c-65aa16618732-kube-api-access-hz6hb\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738802 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-auth-proxy-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738821 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738867 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-audit\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738921 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.738989 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: 
I0131 15:00:55.739068 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739094 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739170 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-images\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739187 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4340fb1-7455-4140-9c75-2d075ea0306c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739222 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-etcd-client\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739262 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739321 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc 
kubenswrapper[4735]: I0131 15:00:55.739353 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-client\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739394 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-serving-cert\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-config\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739455 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjtr\" (UniqueName: \"kubernetes.io/projected/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-kube-api-access-vxjtr\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739477 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4wsz\" (UniqueName: \"kubernetes.io/projected/c7479e59-0549-41c5-99f5-e56b81a9f9a5-kube-api-access-s4wsz\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739494 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739535 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-service-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739553 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-serving-cert\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 
15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-config\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739636 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfkb\" (UniqueName: \"kubernetes.io/projected/f5a5d052-742c-49af-9244-0ec212f993b6-kube-api-access-xwfkb\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739653 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739691 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739714 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739729 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739769 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739787 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-audit-dir\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739803 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739838 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bggn\" (UniqueName: \"kubernetes.io/projected/d4340fb1-7455-4140-9c75-2d075ea0306c-kube-api-access-4bggn\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739854 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739870 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-policies\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739887 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-node-pullsecrets\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739931 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-serving-cert\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739959 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740008 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-encryption-config\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740027 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-dir\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740043 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5a5d052-742c-49af-9244-0ec212f993b6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740081 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740105 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qsb\" (UniqueName: \"kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740126 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740164 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2mr8\" (UniqueName: \"kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740186 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740278 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 
15:00:55.740324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740347 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-image-import-ca\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740364 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-encryption-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.739510 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740832 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741097 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740311 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.740691 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741356 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741204 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741534 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741669 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741776 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741848 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741852 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.741969 4735 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.742029 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.742107 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.742195 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.742607 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.742824 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.743273 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.743413 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.743471 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.743610 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.743929 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.744064 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.744189 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.744344 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.744685 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.752875 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.753285 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.753405 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.753688 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.754282 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.754705 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.755604 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.755823 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.755840 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.756153 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.756235 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.756333 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.756548 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rlq9z"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.758079 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-64jbf"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.758731 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.759576 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.759826 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.759879 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.760443 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.760808 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.760851 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.761082 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.761138 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.769392 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.769927 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.770165 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.770317 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.778059 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7jvwd"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.779668 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.790773 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.791040 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.812371 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.812988 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.813223 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.813543 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.813601 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.817144 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.817408 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.817557 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.817673 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.817841 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.819554 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.819799 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.820041 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.820080 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.820108 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.820395 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.820511 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.821854 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.821865 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.822328 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.825264 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.826167 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.828470 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qv4fk"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.829024 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4lb6b"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.829541 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.829807 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.829954 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.832469 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.832967 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.836555 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.838466 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.838686 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9q4gk"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841438 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841480 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca\") 
pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841505 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841523 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841541 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-etcd-serving-ca\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841583 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-auth-proxy-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841623 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c7479e59-0549-41c5-99f5-e56b81a9f9a5-machine-approver-tls\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841640 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knjb\" (UniqueName: \"kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841656 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6hb\" (UniqueName: \"kubernetes.io/projected/f4f37354-52b0-4164-a33c-65aa16618732-kube-api-access-hz6hb\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841679 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841696 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-audit\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841720 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-config\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841745 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841779 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841803 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841823 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: 
I0131 15:00:55.841841 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841859 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841876 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841897 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841918 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-images\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841951 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4340fb1-7455-4140-9c75-2d075ea0306c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841969 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-etcd-client\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.841987 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842004 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-client\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842020 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-serving-cert\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842063 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-config\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842079 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjtr\" (UniqueName: \"kubernetes.io/projected/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-kube-api-access-vxjtr\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842102 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4wsz\" (UniqueName: \"kubernetes.io/projected/c7479e59-0549-41c5-99f5-e56b81a9f9a5-kube-api-access-s4wsz\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842120 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842137 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-service-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842155 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-serving-cert\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842183 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-config\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842202 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842233 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzsg\" (UniqueName: \"kubernetes.io/projected/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-kube-api-access-xwzsg\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842260 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842279 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfkb\" (UniqueName: \"kubernetes.io/projected/f5a5d052-742c-49af-9244-0ec212f993b6-kube-api-access-xwfkb\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842297 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842314 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842329 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842347 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842366 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-audit-dir\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842382 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842398 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bggn\" (UniqueName: \"kubernetes.io/projected/d4340fb1-7455-4140-9c75-2d075ea0306c-kube-api-access-4bggn\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842415 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842448 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-serving-cert\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842466 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842484 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842510 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-policies\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842534 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-node-pullsecrets\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842554 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842574 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsgh\" (UniqueName: \"kubernetes.io/projected/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-kube-api-access-rzsgh\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842594 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5a5d052-742c-49af-9244-0ec212f993b6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842613 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842632 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-encryption-config\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc 
kubenswrapper[4735]: I0131 15:00:55.842649 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-dir\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842668 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2mr8\" (UniqueName: \"kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842706 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842725 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6qsb\" (UniqueName: \"kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842742 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842777 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-image-import-ca\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842796 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bv668\" (UniqueName: \"kubernetes.io/projected/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-kube-api-access-bv668\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842814 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842841 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-encryption-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842858 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.842875 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8jh\" (UniqueName: \"kubernetes.io/projected/cc0df5cf-ade5-423e-9c9e-d389228c7246-kube-api-access-gt8jh\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.843477 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.844038 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.844527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-audit-dir\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.846841 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: 
\"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.847549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-images\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.850902 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-audit\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.851018 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.851698 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.854356 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-875xp"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.854398 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.854527 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.855674 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.856327 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-config\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.857341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.858765 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.859670 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.859876 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.875010 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c7479e59-0549-41c5-99f5-e56b81a9f9a5-machine-approver-tls\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.875534 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.876256 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.877270 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.878033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.878140 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.878409 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-etcd-client\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.878788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.878956 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-policies\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.879060 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4f37354-52b0-4164-a33c-65aa16618732-node-pullsecrets\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.879079 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.879470 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc 
kubenswrapper[4735]: I0131 15:00:55.879972 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-encryption-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.880014 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-serving-cert\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.880111 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.880652 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.881779 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.881861 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc0df5cf-ade5-423e-9c9e-d389228c7246-audit-dir\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.882263 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.882405 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.882897 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: 
\"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.882961 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-etcd-client\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.883708 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-service-ca-bundle\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.884699 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-encryption-config\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.885357 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-config\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.885505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4340fb1-7455-4140-9c75-2d075ea0306c-config\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.886558 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.886599 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.887106 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc0df5cf-ade5-423e-9c9e-d389228c7246-serving-cert\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.887115 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.882901 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.887831 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.888798 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.889139 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.889324 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7479e59-0549-41c5-99f5-e56b81a9f9a5-auth-proxy-config\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.889659 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4f37354-52b0-4164-a33c-65aa16618732-image-import-ca\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.889832 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.890037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.890328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5a5d052-742c-49af-9244-0ec212f993b6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.890360 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: 
I0131 15:00:55.890732 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4340fb1-7455-4140-9c75-2d075ea0306c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.894004 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.895752 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.898732 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.902379 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f37354-52b0-4164-a33c-65aa16618732-serving-cert\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.903304 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.903960 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.905114 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s5lwz"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.905922 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.909138 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djr57"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.909904 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.910048 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.913331 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.913528 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.914210 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.914314 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.921538 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.921612 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hkkvl"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.922313 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.922643 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.923218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.929120 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.929935 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.933752 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.935387 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.936065 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.936576 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.936991 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.937657 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-r5ppg"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.938241 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.938979 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.939373 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.939758 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.940550 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.941625 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.942067 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.942368 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.942799 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.943944 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dj8\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-kube-api-access-v7dj8\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944119 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944186 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944233 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzsg\" (UniqueName: \"kubernetes.io/projected/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-kube-api-access-xwzsg\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944262 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9gk\" (UniqueName: \"kubernetes.io/projected/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-kube-api-access-kz9gk\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944286 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e643acf0-b846-46cc-b067-dbfd708f50ee-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944339 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-metrics-tls\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944385 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944438 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944470 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-service-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944499 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzsgh\" (UniqueName: \"kubernetes.io/projected/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-kube-api-access-rzsgh\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944530 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944585 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv668\" (UniqueName: \"kubernetes.io/projected/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-kube-api-access-bv668\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944644 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-config\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944684 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944712 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e643acf0-b846-46cc-b067-dbfd708f50ee-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944743 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92vv\" (UniqueName: \"kubernetes.io/projected/67e6ada3-c08d-4b54-a678-379a25e72a15-kube-api-access-w92vv\") pod \"downloads-7954f5f757-qv4fk\" (UID: \"67e6ada3-c08d-4b54-a678-379a25e72a15\") " pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944776 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-serving-cert\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944830 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-config\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.944964 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rlq9z"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.945170 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.945617 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nm28\" (UniqueName: \"kubernetes.io/projected/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-kube-api-access-8nm28\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.945835 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.945992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946163 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-client\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946246 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946311 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-config\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946633 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946662 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.946689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.949355 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.949771 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-v8xhb"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.950370 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.950616 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.951053 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.952052 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.952651 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.953601 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.954064 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.955543 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.957118 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.958195 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.958400 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.959290 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.960306 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.961448 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.962778 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxhdb"] Jan 31 15:00:55 crc 
kubenswrapper[4735]: I0131 15:00:55.963521 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hkkvl"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.966543 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qv4fk"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.968544 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7jvwd"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.969599 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djr57"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.970603 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.972354 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.973521 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s5lwz"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.974171 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.975163 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.976240 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4lb6b"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.977874 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.986014 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.988760 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.990489 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.993052 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r5ppg"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.993938 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.994440 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.995557 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-v8xhb"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.996685 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n"] Jan 31 15:00:55 crc kubenswrapper[4735]: 
I0131 15:00:55.997828 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-c4n8t"] Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.998608 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:55 crc kubenswrapper[4735]: I0131 15:00:55.999091 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8r8s6"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.000148 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8r8s6"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.000278 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.014214 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.033217 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nm28\" (UniqueName: \"kubernetes.io/projected/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-kube-api-access-8nm28\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047286 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-client\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047315 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047365 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dj8\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-kube-api-access-v7dj8\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047401 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047456 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz9gk\" (UniqueName: 
\"kubernetes.io/projected/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-kube-api-access-kz9gk\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047483 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e643acf0-b846-46cc-b067-dbfd708f50ee-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047554 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-metrics-tls\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-service-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047736 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-config\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047786 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e643acf0-b846-46cc-b067-dbfd708f50ee-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w92vv\" (UniqueName: \"kubernetes.io/projected/67e6ada3-c08d-4b54-a678-379a25e72a15-kube-api-access-w92vv\") pod \"downloads-7954f5f757-qv4fk\" (UID: \"67e6ada3-c08d-4b54-a678-379a25e72a15\") " pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.047850 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-serving-cert\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.048398 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-service-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc 
kubenswrapper[4735]: I0131 15:00:56.049019 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-ca\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.049050 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-config\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.049369 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e643acf0-b846-46cc-b067-dbfd708f50ee-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.052076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-serving-cert\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.053344 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-etcd-client\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.054783 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.073063 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.096922 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.113841 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.133625 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.154103 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.160668 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e643acf0-b846-46cc-b067-dbfd708f50ee-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.182045 4735 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.193683 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.213691 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.234478 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.254204 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.274242 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.294858 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.313695 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.323472 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-metrics-tls\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.335230 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.389245 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8jh\" (UniqueName: \"kubernetes.io/projected/cc0df5cf-ade5-423e-9c9e-d389228c7246-kube-api-access-gt8jh\") pod \"apiserver-7bbb656c7d-2bslq\" (UID: \"cc0df5cf-ade5-423e-9c9e-d389228c7246\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.422413 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4wsz\" (UniqueName: \"kubernetes.io/projected/c7479e59-0549-41c5-99f5-e56b81a9f9a5-kube-api-access-s4wsz\") pod \"machine-approver-56656f9798-mqc92\" (UID: \"c7479e59-0549-41c5-99f5-e56b81a9f9a5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.435781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d11cfb-2100-4cd1-9f9c-ff0b18e9253f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r62v9\" (UID: \"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.450571 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.464299 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bggn\" (UniqueName: \"kubernetes.io/projected/d4340fb1-7455-4140-9c75-2d075ea0306c-kube-api-access-4bggn\") pod \"machine-api-operator-5694c8668f-9q4gk\" (UID: \"d4340fb1-7455-4140-9c75-2d075ea0306c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.492577 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knjb\" (UniqueName: \"kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb\") pod \"controller-manager-879f6c89f-5rl28\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.524916 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6hb\" (UniqueName: \"kubernetes.io/projected/f4f37354-52b0-4164-a33c-65aa16618732-kube-api-access-hz6hb\") pod \"apiserver-76f77b778f-xxhdb\" (UID: \"f4f37354-52b0-4164-a33c-65aa16618732\") " pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.535658 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.543049 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.545984 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2mr8\" (UniqueName: \"kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8\") pod \"route-controller-manager-6576b87f9c-7qzcz\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.554211 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.567089 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.580047 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.600260 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.629749 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.634315 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.634686 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.655204 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.659021 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjtr\" (UniqueName: \"kubernetes.io/projected/dc17040c-7aa7-40f0-8896-aae1d82c1d8d-kube-api-access-vxjtr\") pod \"authentication-operator-69f744f599-875xp\" (UID: \"dc17040c-7aa7-40f0-8896-aae1d82c1d8d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:56 crc kubenswrapper[4735]: W0131 15:00:56.665443 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7479e59_0549_41c5_99f5_e56b81a9f9a5.slice/crio-4a1f69e851b11a333cfeb624ff66f1d576127dd4338e33808378c588cb7e28dc WatchSource:0}: Error finding container 4a1f69e851b11a333cfeb624ff66f1d576127dd4338e33808378c588cb7e28dc: Status 404 returned error can't find the container with id 4a1f69e851b11a333cfeb624ff66f1d576127dd4338e33808378c588cb7e28dc Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.691044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6qsb\" (UniqueName: \"kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb\") pod \"oauth-openshift-558db77b4-qf4bc\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.709172 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfkb\" (UniqueName: \"kubernetes.io/projected/f5a5d052-742c-49af-9244-0ec212f993b6-kube-api-access-xwfkb\") pod \"cluster-samples-operator-665b6dd947-p8fzj\" (UID: \"f5a5d052-742c-49af-9244-0ec212f993b6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.715038 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.717258 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.734771 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.735284 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.754454 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.765515 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.773240 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: W0131 15:00:56.779854 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d11cfb_2100_4cd1_9f9c_ff0b18e9253f.slice/crio-14d2e8dfd284b1aa85ea58abedd6286f440a98f713c7098f77d76ed5b190b40e WatchSource:0}: Error finding container 14d2e8dfd284b1aa85ea58abedd6286f440a98f713c7098f77d76ed5b190b40e: Status 404 returned error can't find the container with id 14d2e8dfd284b1aa85ea58abedd6286f440a98f713c7098f77d76ed5b190b40e Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.787975 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.793865 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.814867 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.824611 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.833992 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.860631 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.868448 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.873761 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.874754 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.907715 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.911891 4735 request.go:700] Waited for 1.001419953s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.914509 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.934305 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.946656 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9q4gk"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.954573 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.959099 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.976275 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 15:00:56 crc kubenswrapper[4735]: W0131 15:00:56.979317 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4340fb1_7455_4140_9c75_2d075ea0306c.slice/crio-83d5ff3c5d96fd5fec56b592cdbac11a9daed72dac5e5060ae86859c5d26d3d2 WatchSource:0}: Error finding container 83d5ff3c5d96fd5fec56b592cdbac11a9daed72dac5e5060ae86859c5d26d3d2: Status 404 returned error can't find the container with id 83d5ff3c5d96fd5fec56b592cdbac11a9daed72dac5e5060ae86859c5d26d3d2 Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.993532 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj"] Jan 31 15:00:56 crc kubenswrapper[4735]: I0131 15:00:56.995484 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.014930 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.035919 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.051632 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.064047 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.080077 4735 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: W0131 15:00:57.080697 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3207b01f_1e8b_40df_8f73_8a46dbc61847.slice/crio-1154169686e1ba72dd9f168f8821d67f47d64e556e8ee4ada52decb15d0d1430 WatchSource:0}: Error finding container 1154169686e1ba72dd9f168f8821d67f47d64e556e8ee4ada52decb15d0d1430: Status 404 returned error can't find the container with id 1154169686e1ba72dd9f168f8821d67f47d64e556e8ee4ada52decb15d0d1430 Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.094827 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.114219 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.134279 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.136876 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xxhdb"] Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.153948 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.173702 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.183075 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.203263 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.204960 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-875xp"] Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.220832 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.234534 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.253215 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.274580 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.294556 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.314007 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.333468 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.353643 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.379869 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.379898 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" event={"ID":"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f","Type":"ContainerStarted","Data":"fe56190dc920d203e7070d78e75e3f0e2f1929b8663caccba0a70f0552fba3dd"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.379962 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" event={"ID":"69d11cfb-2100-4cd1-9f9c-ff0b18e9253f","Type":"ContainerStarted","Data":"14d2e8dfd284b1aa85ea58abedd6286f440a98f713c7098f77d76ed5b190b40e"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.384182 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" event={"ID":"dc17040c-7aa7-40f0-8896-aae1d82c1d8d","Type":"ContainerStarted","Data":"9fade803d66eeb0db208eb5596c2e0d7b1b0de215a85e891c42ea110de71debd"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.385665 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" event={"ID":"f4f37354-52b0-4164-a33c-65aa16618732","Type":"ContainerStarted","Data":"d274e5360714ed33254fcbca09b7453a13914572e7185d8e77dbac5844b97c07"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.387656 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" event={"ID":"3207b01f-1e8b-40df-8f73-8a46dbc61847","Type":"ContainerStarted","Data":"94eee674b8406b06605308427b2e6829c050f7a3c2a7265c327493dba5536299"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.387688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" event={"ID":"3207b01f-1e8b-40df-8f73-8a46dbc61847","Type":"ContainerStarted","Data":"1154169686e1ba72dd9f168f8821d67f47d64e556e8ee4ada52decb15d0d1430"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.387941 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.389672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" event={"ID":"c7479e59-0549-41c5-99f5-e56b81a9f9a5","Type":"ContainerStarted","Data":"f088683849b892a8fc1cbbdccd8bdf40a51cb2f4222face2e637ee343979a406"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.389708 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" event={"ID":"c7479e59-0549-41c5-99f5-e56b81a9f9a5","Type":"ContainerStarted","Data":"4a1f69e851b11a333cfeb624ff66f1d576127dd4338e33808378c588cb7e28dc"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.390037 4735 patch_prober.go:28] 
interesting pod/route-controller-manager-6576b87f9c-7qzcz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.390083 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.391769 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" event={"ID":"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c","Type":"ContainerStarted","Data":"ed8c4fdc8cc4e3f46a80f061e89d4cd4880f60082fbbb62445c3c027b7d0c538"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.391797 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" event={"ID":"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c","Type":"ContainerStarted","Data":"4b4bcbbdf9aa01b8114ec72ec2993e590588c5a85c2d7da1dfb934c41337c346"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.392444 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.393517 4735 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5rl28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.393545 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.394074 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" event={"ID":"f5a5d052-742c-49af-9244-0ec212f993b6","Type":"ContainerStarted","Data":"958d0aaf26ab9eb7b5ae544e095f785af17bc41dc84108db73e9b27c2ffb2694"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.394099 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" event={"ID":"f5a5d052-742c-49af-9244-0ec212f993b6","Type":"ContainerStarted","Data":"88fb1546d21b8d08c9fc4cbad3febec72fd77935a6307dcb62c6f4884d5edcad"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.394383 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.396819 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" event={"ID":"f5835267-03a0-4567-b113-84e6a885af15","Type":"ContainerStarted","Data":"691f2fa2e0fc8498d6225a229ce303faa53264df241fb8776043a1830a69622a"} 
Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.398803 4735 generic.go:334] "Generic (PLEG): container finished" podID="cc0df5cf-ade5-423e-9c9e-d389228c7246" containerID="eccd880c442a57cdf488bafdeef7589302cf9ef13de84128522707f014fdf973" exitCode=0 Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.399032 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" event={"ID":"cc0df5cf-ade5-423e-9c9e-d389228c7246","Type":"ContainerDied","Data":"eccd880c442a57cdf488bafdeef7589302cf9ef13de84128522707f014fdf973"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.399064 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" event={"ID":"cc0df5cf-ade5-423e-9c9e-d389228c7246","Type":"ContainerStarted","Data":"6af9b0a0e182218753e658280fc05fe5f9b6e0814cc752078dc7885f2e8f3a54"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.404510 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" event={"ID":"d4340fb1-7455-4140-9c75-2d075ea0306c","Type":"ContainerStarted","Data":"8bfbeb92b93dbff757e705b22cc91c5e070bc09a330d20e3e40f0e51ae98b549"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.404587 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" event={"ID":"d4340fb1-7455-4140-9c75-2d075ea0306c","Type":"ContainerStarted","Data":"239a57ba48a0762975ff1a58d30c4062001444833798fefe6d3edfb53c18643e"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.404605 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" event={"ID":"d4340fb1-7455-4140-9c75-2d075ea0306c","Type":"ContainerStarted","Data":"83d5ff3c5d96fd5fec56b592cdbac11a9daed72dac5e5060ae86859c5d26d3d2"} Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.414288 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.433321 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.454582 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.474464 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.493585 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.514228 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.534787 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.556008 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.573745 4735 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.594351 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.614521 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.633852 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.654924 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.700057 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kkkpm\" (UID: \"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.708841 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzsg\" (UniqueName: \"kubernetes.io/projected/c660ea67-0ecc-40af-8dd7-a4a50d350ee3-kube-api-access-xwzsg\") pod \"openshift-apiserver-operator-796bbdcf4f-nfk5g\" (UID: \"c660ea67-0ecc-40af-8dd7-a4a50d350ee3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.731829 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzsgh\" (UniqueName: \"kubernetes.io/projected/f0a6bac8-059b-4a0a-8aa3-589a921d20a9-kube-api-access-rzsgh\") pod \"openshift-config-operator-7777fb866f-fhmhw\" (UID: \"f0a6bac8-059b-4a0a-8aa3-589a921d20a9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.754348 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.754580 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.756308 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv668\" (UniqueName: \"kubernetes.io/projected/c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0-kube-api-access-bv668\") pod \"openshift-controller-manager-operator-756b6f6bc6-v95j9\" (UID: \"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.771442 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.773895 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.785055 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.795075 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.815404 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.854740 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.855013 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.890177 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.894331 4735 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.911989 4735 request.go:700] Waited for 1.911436751s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.913886 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.933588 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.970884 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nm28\" (UniqueName: \"kubernetes.io/projected/5b6743de-3bcf-49a3-b8ec-a1dbaf72293c-kube-api-access-8nm28\") pod \"etcd-operator-b45778765-7jvwd\" (UID: \"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:57 crc kubenswrapper[4735]: I0131 15:00:57.991109 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dj8\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-kube-api-access-v7dj8\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: \"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.007591 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e643acf0-b846-46cc-b067-dbfd708f50ee-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7b2f\" (UID: 
\"e643acf0-b846-46cc-b067-dbfd708f50ee\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.029092 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.031243 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz9gk\" (UniqueName: \"kubernetes.io/projected/ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0-kube-api-access-kz9gk\") pod \"dns-operator-744455d44c-4lb6b\" (UID: \"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0\") " pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:58 crc kubenswrapper[4735]: W0131 15:00:58.036587 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56a3d8b0_f1b6_4f3b_8ea2_5dd1c1091708.slice/crio-f16e8dd6c756ece045f322fadcb1ff407bdcc95c002b003dd6fea2e14a337d92 WatchSource:0}: Error finding container f16e8dd6c756ece045f322fadcb1ff407bdcc95c002b003dd6fea2e14a337d92: Status 404 returned error can't find the container with id f16e8dd6c756ece045f322fadcb1ff407bdcc95c002b003dd6fea2e14a337d92 Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.039576 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.044006 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.053458 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w92vv\" (UniqueName: \"kubernetes.io/projected/67e6ada3-c08d-4b54-a678-379a25e72a15-kube-api-access-w92vv\") pod \"downloads-7954f5f757-qv4fk\" (UID: \"67e6ada3-c08d-4b54-a678-379a25e72a15\") " pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:58 crc kubenswrapper[4735]: W0131 15:00:58.061871 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0a6bac8_059b_4a0a_8aa3_589a921d20a9.slice/crio-7aa71011cde9f8d2075d2dec2d0a346565287075c87b8852dab1153202f1db38 WatchSource:0}: Error finding container 7aa71011cde9f8d2075d2dec2d0a346565287075c87b8852dab1153202f1db38: Status 404 returned error can't find the container with id 7aa71011cde9f8d2075d2dec2d0a346565287075c87b8852dab1153202f1db38 Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086309 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086378 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzbw\" (UniqueName: \"kubernetes.io/projected/0ae31b9d-dde8-465b-9b2e-e81832178125-kube-api-access-skzbw\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc 
kubenswrapper[4735]: I0131 15:00:58.086398 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-metrics-certs\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086434 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086456 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086681 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086771 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-trusted-ca\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086799 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086816 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae31b9d-dde8-465b-9b2e-e81832178125-serving-cert\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.086878 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:58.586861687 +0000 UTC m=+144.360190729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086953 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhsk\" (UniqueName: \"kubernetes.io/projected/805c24cb-0eea-458e-ae49-38eb501feadc-kube-api-access-4lhsk\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.086993 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-config\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087132 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087194 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-stats-auth\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087250 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c82w\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087273 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-default-certificate\") pod 
\"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087303 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087323 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6t74\" (UniqueName: \"kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087458 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/805c24cb-0eea-458e-ae49-38eb501feadc-service-ca-bundle\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.087526 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.090622 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.090688 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.090708 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.091284 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.121390 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.136159 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.142769 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.154533 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191304 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191574 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c82w\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191605 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-default-certificate\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191647 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nmf\" (UniqueName: \"kubernetes.io/projected/00174b9a-5905-4611-943b-3652273b31b5-kube-api-access-59nmf\") pod \"migrator-59844c95c7-brzfv\" (UID: \"00174b9a-5905-4611-943b-3652273b31b5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff4c4eb9-3ad4-47b0-a47a-974b41396828-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191728 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6t74\" (UniqueName: \"kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191746 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-registration-dir\") pod 
\"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191764 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/805c24cb-0eea-458e-ae49-38eb501feadc-service-ca-bundle\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191796 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8c8\" (UniqueName: \"kubernetes.io/projected/52d5f4fc-bb86-426a-b56e-810e4ffc1315-kube-api-access-sk8c8\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd3f921e-1695-4e8e-acd1-eff2a5981dac-tmpfs\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191830 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191882 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.191979 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192014 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc 
kubenswrapper[4735]: I0131 15:00:58.192034 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7640c224-4041-45b4-9755-e8b091d0f7c9-proxy-tls\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192074 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6w9\" (UniqueName: \"kubernetes.io/projected/8935f1ed-7ef3-4719-865c-aab9b67e75da-kube-api-access-ds6w9\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192104 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skzbw\" (UniqueName: \"kubernetes.io/projected/0ae31b9d-dde8-465b-9b2e-e81832178125-kube-api-access-skzbw\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192122 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sqn\" (UniqueName: \"kubernetes.io/projected/49baed04-a186-4573-b913-9e00661a18a3-kube-api-access-f4sqn\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192189 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f8aafc5-01be-4e71-a217-6986de9a8f08-trusted-ca\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192222 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192243 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8pdk\" (UniqueName: \"kubernetes.io/projected/7640c224-4041-45b4-9755-e8b091d0f7c9-kube-api-access-m8pdk\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-webhook-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 
crc kubenswrapper[4735]: I0131 15:00:58.192318 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2037333-026d-4944-8810-18e892c44792-proxy-tls\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192335 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7640c224-4041-45b4-9755-e8b091d0f7c9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192351 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-images\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.192383 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:58.692349448 +0000 UTC m=+144.465678490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.192501 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193358 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kld4s\" (UniqueName: \"kubernetes.io/projected/93789125-4788-45cd-bd8d-be348946b798-kube-api-access-kld4s\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193393 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfd7\" (UniqueName: \"kubernetes.io/projected/dc00da9a-bbf1-44ac-b70a-a04198031e2b-kube-api-access-4tfd7\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 
15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193590 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-trusted-ca\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193629 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae31b9d-dde8-465b-9b2e-e81832178125-serving-cert\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193649 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqvp\" (UniqueName: \"kubernetes.io/projected/977c9314-7448-48b4-acd7-583385fd7138-kube-api-access-qbqvp\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhsk\" (UniqueName: \"kubernetes.io/projected/805c24cb-0eea-458e-ae49-38eb501feadc-kube-api-access-4lhsk\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193861 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwldl\" (UniqueName: \"kubernetes.io/projected/da2f588d-a855-4df4-b3ab-fba7111c974c-kube-api-access-kwldl\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193933 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jknq6\" 
(UniqueName: \"kubernetes.io/projected/9c7cc886-4abd-48f6-8256-67b5011f9cb5-kube-api-access-jknq6\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.193971 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-config\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194010 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8935f1ed-7ef3-4719-865c-aab9b67e75da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194049 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-srv-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194079 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7l5l\" (UniqueName: \"kubernetes.io/projected/70d59cce-34b9-480a-82df-5c6303374dd8-kube-api-access-n7l5l\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194099 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194143 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsx8\" (UniqueName: \"kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194162 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194209 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.194898 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/805c24cb-0eea-458e-ae49-38eb501feadc-service-ca-bundle\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195618 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkjd\" (UniqueName: \"kubernetes.io/projected/ff4c4eb9-3ad4-47b0-a47a-974b41396828-kube-api-access-2qkjd\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195713 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-certs\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195757 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbww\" (UniqueName: \"kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195794 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195815 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-metrics-certs\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195834 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjnzz\" (UniqueName: \"kubernetes.io/projected/e2037333-026d-4944-8810-18e892c44792-kube-api-access-hjnzz\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195870 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195892 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrk2p\" (UniqueName: \"kubernetes.io/projected/12afefa5-e59c-482f-97fc-00499ee3a1c2-kube-api-access-zrk2p\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195914 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-socket-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195968 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52d5f4fc-bb86-426a-b56e-810e4ffc1315-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.195998 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4c4eb9-3ad4-47b0-a47a-974b41396828-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2f588d-a855-4df4-b3ab-fba7111c974c-cert\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196042 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/977c9314-7448-48b4-acd7-583385fd7138-signing-cabundle\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196182 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-config\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196202 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-srv-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196225 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93789125-4788-45cd-bd8d-be348946b798-config\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196247 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-mountpoint-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196267 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-csi-data-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196292 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196314 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzm9\" (UniqueName: \"kubernetes.io/projected/dd3f921e-1695-4e8e-acd1-eff2a5981dac-kube-api-access-wzzm9\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196338 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/977c9314-7448-48b4-acd7-583385fd7138-signing-key\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196379 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f8aafc5-01be-4e71-a217-6986de9a8f08-metrics-tls\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196439 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdh9\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-kube-api-access-5rdh9\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196478 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d06f51c-fb29-4011-9168-ff8321e05dd9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196540 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-plugins-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196579 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93789125-4788-45cd-bd8d-be348946b798-serving-cert\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vk2hv\" (UniqueName: \"kubernetes.io/projected/5d06f51c-fb29-4011-9168-ff8321e05dd9-kube-api-access-vk2hv\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196630 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-node-bootstrap-token\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196660 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196678 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12afefa5-e59c-482f-97fc-00499ee3a1c2-config-volume\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196723 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196752 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12afefa5-e59c-482f-97fc-00499ee3a1c2-metrics-tls\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196805 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-stats-auth\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.196822 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.197288 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 15:00:58.69727058 +0000 UTC m=+144.470599622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.198644 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.200489 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-trusted-ca\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.201653 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.203140 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae31b9d-dde8-465b-9b2e-e81832178125-config\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.203393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.203466 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae31b9d-dde8-465b-9b2e-e81832178125-serving-cert\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.206017 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.206451 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.206880 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.207010 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.208082 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-default-certificate\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.209441 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.210279 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-stats-auth\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.210340 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.210825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.212816 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.215979 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/805c24cb-0eea-458e-ae49-38eb501feadc-metrics-certs\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.238148 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6t74\" (UniqueName: \"kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74\") pod \"console-f9d7485db-kkjjj\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.253518 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c82w\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.278315 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skzbw\" (UniqueName: \"kubernetes.io/projected/0ae31b9d-dde8-465b-9b2e-e81832178125-kube-api-access-skzbw\") pod \"console-operator-58897d9998-rlq9z\" (UID: \"0ae31b9d-dde8-465b-9b2e-e81832178125\") " pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.298093 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.298682 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk8c8\" (UniqueName: \"kubernetes.io/projected/52d5f4fc-bb86-426a-b56e-810e4ffc1315-kube-api-access-sk8c8\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.299955 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:58.799139627 +0000 UTC m=+144.572468669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310010 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd3f921e-1695-4e8e-acd1-eff2a5981dac-tmpfs\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310469 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7640c224-4041-45b4-9755-e8b091d0f7c9-proxy-tls\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6w9\" (UniqueName: \"kubernetes.io/projected/8935f1ed-7ef3-4719-865c-aab9b67e75da-kube-api-access-ds6w9\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310519 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sqn\" (UniqueName: \"kubernetes.io/projected/49baed04-a186-4573-b913-9e00661a18a3-kube-api-access-f4sqn\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310539 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f8aafc5-01be-4e71-a217-6986de9a8f08-trusted-ca\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310563 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-webhook-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310585 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8pdk\" (UniqueName: \"kubernetes.io/projected/7640c224-4041-45b4-9755-e8b091d0f7c9-kube-api-access-m8pdk\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310630 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2037333-026d-4944-8810-18e892c44792-proxy-tls\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310648 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7640c224-4041-45b4-9755-e8b091d0f7c9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310738 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kld4s\" (UniqueName: \"kubernetes.io/projected/93789125-4788-45cd-bd8d-be348946b798-kube-api-access-kld4s\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310762 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfd7\" (UniqueName: \"kubernetes.io/projected/dc00da9a-bbf1-44ac-b70a-a04198031e2b-kube-api-access-4tfd7\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310784 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-images\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310881 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqvp\" (UniqueName: \"kubernetes.io/projected/977c9314-7448-48b4-acd7-583385fd7138-kube-api-access-qbqvp\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310932 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwldl\" (UniqueName: \"kubernetes.io/projected/da2f588d-a855-4df4-b3ab-fba7111c974c-kube-api-access-kwldl\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310953 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jknq6\" (UniqueName: \"kubernetes.io/projected/9c7cc886-4abd-48f6-8256-67b5011f9cb5-kube-api-access-jknq6\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310981 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310999 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8935f1ed-7ef3-4719-865c-aab9b67e75da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311016 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-srv-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311034 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7l5l\" (UniqueName: \"kubernetes.io/projected/70d59cce-34b9-480a-82df-5c6303374dd8-kube-api-access-n7l5l\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311057 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311077 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311094 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsx8\" (UniqueName: \"kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311116 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkjd\" (UniqueName: \"kubernetes.io/projected/ff4c4eb9-3ad4-47b0-a47a-974b41396828-kube-api-access-2qkjd\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-certs\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311160 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311181 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbww\" (UniqueName: \"kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311201 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 
15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311224 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjnzz\" (UniqueName: \"kubernetes.io/projected/e2037333-026d-4944-8810-18e892c44792-kube-api-access-hjnzz\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrk2p\" (UniqueName: \"kubernetes.io/projected/12afefa5-e59c-482f-97fc-00499ee3a1c2-kube-api-access-zrk2p\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311267 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-socket-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311294 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52d5f4fc-bb86-426a-b56e-810e4ffc1315-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311316 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4c4eb9-3ad4-47b0-a47a-974b41396828-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311337 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2f588d-a855-4df4-b3ab-fba7111c974c-cert\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311354 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/977c9314-7448-48b4-acd7-583385fd7138-signing-cabundle\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311372 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-config\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311389 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-srv-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311435 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93789125-4788-45cd-bd8d-be348946b798-config\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311455 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzm9\" (UniqueName: \"kubernetes.io/projected/dd3f921e-1695-4e8e-acd1-eff2a5981dac-kube-api-access-wzzm9\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/977c9314-7448-48b4-acd7-583385fd7138-signing-key\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311485 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-mountpoint-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311503 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-csi-data-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311525 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f8aafc5-01be-4e71-a217-6986de9a8f08-metrics-tls\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311548 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdh9\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-kube-api-access-5rdh9\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311576 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d06f51c-fb29-4011-9168-ff8321e05dd9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" 
Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311598 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-plugins-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311617 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-node-bootstrap-token\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311634 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93789125-4788-45cd-bd8d-be348946b798-serving-cert\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311649 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk2hv\" (UniqueName: \"kubernetes.io/projected/5d06f51c-fb29-4011-9168-ff8321e05dd9-kube-api-access-vk2hv\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311676 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12afefa5-e59c-482f-97fc-00499ee3a1c2-config-volume\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311694 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311713 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12afefa5-e59c-482f-97fc-00499ee3a1c2-metrics-tls\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311764 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59nmf\" (UniqueName: \"kubernetes.io/projected/00174b9a-5905-4611-943b-3652273b31b5-kube-api-access-59nmf\") pod 
\"migrator-59844c95c7-brzfv\" (UID: \"00174b9a-5905-4611-943b-3652273b31b5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311784 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff4c4eb9-3ad4-47b0-a47a-974b41396828-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.311803 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-registration-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.312062 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-registration-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.310529 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd3f921e-1695-4e8e-acd1-eff2a5981dac-tmpfs\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.314178 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f8aafc5-01be-4e71-a217-6986de9a8f08-trusted-ca\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.315842 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-images\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.316584 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-webhook-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.316675 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-socket-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.316762 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/e2037333-026d-4944-8810-18e892c44792-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.316850 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.317633 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.317712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7640c224-4041-45b4-9755-e8b091d0f7c9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.317741 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.317825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4c4eb9-3ad4-47b0-a47a-974b41396828-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.317867 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12afefa5-e59c-482f-97fc-00499ee3a1c2-config-volume\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.317961 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:58.817947859 +0000 UTC m=+144.591276901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.319447 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/977c9314-7448-48b4-acd7-583385fd7138-signing-cabundle\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.319935 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93789125-4788-45cd-bd8d-be348946b798-config\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.322299 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.323769 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-config\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.324195 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-plugins-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.324141 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-mountpoint-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.324326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/49baed04-a186-4573-b913-9e00661a18a3-csi-data-dir\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.326608 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-srv-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.328600 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.331612 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2037333-026d-4944-8810-18e892c44792-proxy-tls\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.334463 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8935f1ed-7ef3-4719-865c-aab9b67e75da-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.335144 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd3f921e-1695-4e8e-acd1-eff2a5981dac-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.337910 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7640c224-4041-45b4-9755-e8b091d0f7c9-proxy-tls\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.338117 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-node-bootstrap-token\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.338888 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhsk\" (UniqueName: \"kubernetes.io/projected/805c24cb-0eea-458e-ae49-38eb501feadc-kube-api-access-4lhsk\") pod \"router-default-5444994796-64jbf\" (UID: \"805c24cb-0eea-458e-ae49-38eb501feadc\") " pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.339569 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/977c9314-7448-48b4-acd7-583385fd7138-signing-key\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.339390 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12afefa5-e59c-482f-97fc-00499ee3a1c2-metrics-tls\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: 
I0131 15:00:58.340351 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.347742 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2f588d-a855-4df4-b3ab-fba7111c974c-cert\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.348738 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93789125-4788-45cd-bd8d-be348946b798-serving-cert\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.349793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.349799 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f8aafc5-01be-4e71-a217-6986de9a8f08-metrics-tls\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.349916 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff4c4eb9-3ad4-47b0-a47a-974b41396828-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.353248 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c7cc886-4abd-48f6-8256-67b5011f9cb5-certs\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.353540 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk8c8\" (UniqueName: \"kubernetes.io/projected/52d5f4fc-bb86-426a-b56e-810e4ffc1315-kube-api-access-sk8c8\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.357148 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.357707 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/70d59cce-34b9-480a-82df-5c6303374dd8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.358067 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dc00da9a-bbf1-44ac-b70a-a04198031e2b-srv-cert\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.362174 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52d5f4fc-bb86-426a-b56e-810e4ffc1315-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nf8zl\" (UID: \"52d5f4fc-bb86-426a-b56e-810e4ffc1315\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.363123 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d06f51c-fb29-4011-9168-ff8321e05dd9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.372309 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6w9\" (UniqueName: \"kubernetes.io/projected/8935f1ed-7ef3-4719-865c-aab9b67e75da-kube-api-access-ds6w9\") pod \"multus-admission-controller-857f4d67dd-s5lwz\" (UID: \"8935f1ed-7ef3-4719-865c-aab9b67e75da\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.389924 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sqn\" (UniqueName: \"kubernetes.io/projected/49baed04-a186-4573-b913-9e00661a18a3-kube-api-access-f4sqn\") pod \"csi-hostpathplugin-8r8s6\" (UID: \"49baed04-a186-4573-b913-9e00661a18a3\") " pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.398559 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.411106 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.413938 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.414554 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:58.914532784 +0000 UTC m=+144.687861826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.421234 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwldl\" (UniqueName: \"kubernetes.io/projected/da2f588d-a855-4df4-b3ab-fba7111c974c-kube-api-access-kwldl\") pod \"ingress-canary-v8xhb\" (UID: \"da2f588d-a855-4df4-b3ab-fba7111c974c\") " pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.434813 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.454993 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kld4s\" (UniqueName: \"kubernetes.io/projected/93789125-4788-45cd-bd8d-be348946b798-kube-api-access-kld4s\") pod \"service-ca-operator-777779d784-f5mk5\" (UID: \"93789125-4788-45cd-bd8d-be348946b798\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.458784 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" event={"ID":"dc17040c-7aa7-40f0-8896-aae1d82c1d8d","Type":"ContainerStarted","Data":"6a3f4f4e8dfc3831e4f5ec74a528c4679420a3d108a1b1c1acf0d3136d1539e8"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.461221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfd7\" (UniqueName: \"kubernetes.io/projected/dc00da9a-bbf1-44ac-b70a-a04198031e2b-kube-api-access-4tfd7\") pod \"olm-operator-6b444d44fb-8fshf\" (UID: \"dc00da9a-bbf1-44ac-b70a-a04198031e2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.462281 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" event={"ID":"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708","Type":"ContainerStarted","Data":"f16e8dd6c756ece045f322fadcb1ff407bdcc95c002b003dd6fea2e14a337d92"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.474531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" event={"ID":"f5835267-03a0-4567-b113-84e6a885af15","Type":"ContainerStarted","Data":"92b0fb1592b3bda9d38766e268f9ea97cab522bbe006107c57a88626adc47ac5"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.475150 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.476667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" event={"ID":"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0","Type":"ContainerStarted","Data":"ac103ae291f06ba3dc34c430ee1681d8a65b01ccbf5438f58853f9a44794f47c"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.478404 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.486143 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.492859 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.504118 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" event={"ID":"c7479e59-0549-41c5-99f5-e56b81a9f9a5","Type":"ContainerStarted","Data":"7a2c09ef3d25ccecf1cd69a152deef206974746b486cb9694e9a34159c2e4bec"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.505122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jknq6\" (UniqueName: \"kubernetes.io/projected/9c7cc886-4abd-48f6-8256-67b5011f9cb5-kube-api-access-jknq6\") pod \"machine-config-server-c4n8t\" (UID: \"9c7cc886-4abd-48f6-8256-67b5011f9cb5\") " pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.509363 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" event={"ID":"c660ea67-0ecc-40af-8dd7-a4a50d350ee3","Type":"ContainerStarted","Data":"e735f5077a2e17a25de373eab01b2d4a8390297e663fb2e209f27b197c907224"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.510075 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjnzz\" (UniqueName: \"kubernetes.io/projected/e2037333-026d-4944-8810-18e892c44792-kube-api-access-hjnzz\") pod \"machine-config-operator-74547568cd-qsdp7\" (UID: \"e2037333-026d-4944-8810-18e892c44792\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.510834 4735 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qf4bc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.510878 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" podUID="f5835267-03a0-4567-b113-84e6a885af15" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.514987 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" event={"ID":"f5a5d052-742c-49af-9244-0ec212f993b6","Type":"ContainerStarted","Data":"6a4ae10e2cd1bc2ac0c1a6d32c54eb7a6d8fcf2d60bd51b870787738a4c19d62"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.520402 4735 generic.go:334] "Generic (PLEG): container finished" podID="f0a6bac8-059b-4a0a-8aa3-589a921d20a9" containerID="c61add2c6c38bac701def6a257b28bb85ab7809f77c72d082d9da0904678ccb4" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.520478 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" event={"ID":"f0a6bac8-059b-4a0a-8aa3-589a921d20a9","Type":"ContainerDied","Data":"c61add2c6c38bac701def6a257b28bb85ab7809f77c72d082d9da0904678ccb4"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.520504 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" 
event={"ID":"f0a6bac8-059b-4a0a-8aa3-589a921d20a9","Type":"ContainerStarted","Data":"7aa71011cde9f8d2075d2dec2d0a346565287075c87b8852dab1153202f1db38"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.529538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.533511 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.033486814 +0000 UTC m=+144.806815856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.542713 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.550877 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.552734 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrk2p\" (UniqueName: \"kubernetes.io/projected/12afefa5-e59c-482f-97fc-00499ee3a1c2-kube-api-access-zrk2p\") pod \"dns-default-r5ppg\" (UID: \"12afefa5-e59c-482f-97fc-00499ee3a1c2\") " pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.552751 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4f37354-52b0-4164-a33c-65aa16618732" containerID="7daeab89f713258be50574856761ec9e4117aef191d32f422b6bace55348643a" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.552817 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" event={"ID":"f4f37354-52b0-4164-a33c-65aa16618732","Type":"ContainerDied","Data":"7daeab89f713258be50574856761ec9e4117aef191d32f422b6bace55348643a"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.558562 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-r5ppg" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.574320 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qv4fk"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.580086 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" event={"ID":"cc0df5cf-ade5-423e-9c9e-d389228c7246","Type":"ContainerStarted","Data":"3e524289f5f214d49991841b644d5d2dd6720c4e63a8817b10cf70acab3ed064"} Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.585486 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wv8jn\" (UID: \"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.597481 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.597878 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-v8xhb" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.607279 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c4n8t" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.616305 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsx8\" (UniqueName: \"kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8\") pod \"collect-profiles-29497860-vprcf\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.621130 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkjd\" (UniqueName: \"kubernetes.io/projected/ff4c4eb9-3ad4-47b0-a47a-974b41396828-kube-api-access-2qkjd\") pod \"kube-storage-version-migrator-operator-b67b599dd-5nc6f\" (UID: \"ff4c4eb9-3ad4-47b0-a47a-974b41396828\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.629407 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdh9\" (UniqueName: \"kubernetes.io/projected/1f8aafc5-01be-4e71-a217-6986de9a8f08-kube-api-access-5rdh9\") pod \"ingress-operator-5b745b69d9-4sz7n\" (UID: \"1f8aafc5-01be-4e71-a217-6986de9a8f08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.634039 4735 csr.go:261] certificate signing request csr-4ls4j is approved, waiting to be issued Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.638923 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.641918 4735 csr.go:257] certificate signing request csr-4ls4j is issued Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.641944 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n7l5l\" (UniqueName: \"kubernetes.io/projected/70d59cce-34b9-480a-82df-5c6303374dd8-kube-api-access-n7l5l\") pod \"catalog-operator-68c6474976-dwl27\" (UID: \"70d59cce-34b9-480a-82df-5c6303374dd8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.647668 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.648300 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.648881 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.148864901 +0000 UTC m=+144.922193943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.649784 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7jvwd"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.650347 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.652188 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.152177076 +0000 UTC m=+144.925506118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: W0131 15:00:58.655769 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e6ada3_c08d_4b54_a678_379a25e72a15.slice/crio-64a29e3d8d5a0f662a305b10df4e0d1c2463f65c0f14438fc8a63685e9a00ff1 WatchSource:0}: Error finding container 64a29e3d8d5a0f662a305b10df4e0d1c2463f65c0f14438fc8a63685e9a00ff1: Status 404 returned error can't find the container with id 64a29e3d8d5a0f662a305b10df4e0d1c2463f65c0f14438fc8a63685e9a00ff1 Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.686994 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.687616 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbqvp\" (UniqueName: \"kubernetes.io/projected/977c9314-7448-48b4-acd7-583385fd7138-kube-api-access-qbqvp\") pod \"service-ca-9c57cc56f-hkkvl\" (UID: \"977c9314-7448-48b4-acd7-583385fd7138\") " pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.693326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nmf\" (UniqueName: \"kubernetes.io/projected/00174b9a-5905-4611-943b-3652273b31b5-kube-api-access-59nmf\") pod \"migrator-59844c95c7-brzfv\" (UID: \"00174b9a-5905-4611-943b-3652273b31b5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.713464 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzm9\" (UniqueName: \"kubernetes.io/projected/dd3f921e-1695-4e8e-acd1-eff2a5981dac-kube-api-access-wzzm9\") pod \"packageserver-d55dfcdfc-p8j49\" (UID: \"dd3f921e-1695-4e8e-acd1-eff2a5981dac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.753330 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.755280 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.255247188 +0000 UTC m=+145.028576230 (durationBeforeRetry 500ms). 
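The mount and unmount failures in this stretch of the log all trace back to one condition: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet, so every MountDevice and TearDown attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails and is re-queued with the fixed 500ms durationBeforeRetry shown in the nestedpendingoperations messages. Below is a minimal Go sketch of that kind of fixed-delay retry gating; it is an illustration only, not kubelet's actual nestedpendingoperations implementation.

// Simplified model of the "No retries permitted until ... (durationBeforeRetry 500ms)"
// gating visible in the entries above. Illustration only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate refuses to re-run an operation until a fixed delay has elapsed
// since its last failure, keyed here (as an assumption) by volume name.
type retryGate struct {
	delay   time.Duration
	nextTry map[string]time.Time
}

func (g *retryGate) run(key string, op func() error) error {
	now := time.Now()
	if t, ok := g.nextTry[key]; ok && now.Before(t) {
		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.nextTry[key] = now.Add(g.delay)
		return err
	}
	delete(g.nextTry, key)
	return nil
}

func main() {
	gate := &retryGate{delay: 500 * time.Millisecond, nextTry: map[string]time.Time{}}
	mount := func() error {
		// Stand-in for MountDevice failing while the CSI driver is unregistered.
		return errors.New("driver not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		fmt.Println(gate.run("pvc-657094db", mount))
		time.Sleep(200 * time.Millisecond)
	}
}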
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.755371 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8pdk\" (UniqueName: \"kubernetes.io/projected/7640c224-4041-45b4-9755-e8b091d0f7c9-kube-api-access-m8pdk\") pod \"machine-config-controller-84d6567774-tzg7z\" (UID: \"7640c224-4041-45b4-9755-e8b091d0f7c9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.759727 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.762386 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbww\" (UniqueName: \"kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww\") pod \"marketplace-operator-79b997595-djr57\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.767878 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.782496 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.799833 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk2hv\" (UniqueName: \"kubernetes.io/projected/5d06f51c-fb29-4011-9168-ff8321e05dd9-kube-api-access-vk2hv\") pod \"package-server-manager-789f6589d5-nnxc7\" (UID: \"5d06f51c-fb29-4011-9168-ff8321e05dd9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.800172 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.812051 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.818946 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.827470 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.834329 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.850593 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.856840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.857416 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.357401983 +0000 UTC m=+145.130731025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.871163 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.885276 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.889712 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.927998 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4lb6b"] Jan 31 15:00:58 crc kubenswrapper[4735]: W0131 15:00:58.945002 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e5eeff_bbc9_4c8d_9b4d_b4e3cda36d07.slice/crio-35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b WatchSource:0}: Error finding container 35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b: Status 404 returned error can't find the container with id 35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.958384 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:58 crc kubenswrapper[4735]: E0131 15:00:58.959026 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.458985922 +0000 UTC m=+145.232314964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:58 crc kubenswrapper[4735]: I0131 15:00:58.980353 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.035016 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rlq9z"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.062340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.062843 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.562825056 +0000 UTC m=+145.336154098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.163226 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.164890 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.664858968 +0000 UTC m=+145.438188010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.165401 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.165925 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.665908328 +0000 UTC m=+145.439237370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.270695 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.271710 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.771694738 +0000 UTC m=+145.545023780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.310526 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.372705 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.373325 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.873308788 +0000 UTC m=+145.646637830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.410339 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p8fzj" podStartSLOduration=122.410299895 podStartE2EDuration="2m2.410299895s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:59.408886034 +0000 UTC m=+145.182215076" watchObservedRunningTime="2026-01-31 15:00:59.410299895 +0000 UTC m=+145.183628937" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.445410 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s5lwz"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.472100 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.478307 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.479326 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:00:59.979306354 +0000 UTC m=+145.752635396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.529967 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8r8s6"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.580809 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.581260 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.081246844 +0000 UTC m=+145.854575886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.597566 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9q4gk" podStartSLOduration=121.597544344 podStartE2EDuration="2m1.597544344s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:59.560367572 +0000 UTC m=+145.333696604" watchObservedRunningTime="2026-01-31 15:00:59.597544344 +0000 UTC m=+145.370873386" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.604790 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.649061 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 14:55:58 +0000 UTC, rotation deadline is 2026-11-09 20:13:03.130057751 +0000 UTC Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.649099 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6773h12m3.480961611s for next certificate rotation Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.656204 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-v8xhb"] Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.659283 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" 
event={"ID":"0ae31b9d-dde8-465b-9b2e-e81832178125","Type":"ContainerStarted","Data":"2e3c0750c83dbcb09d5871bf631555650deb694acdf6f7b8c0569e16c3f81a10"} Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.683953 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.183586404 +0000 UTC m=+145.956915456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.682443 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.684805 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.687626 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.18759926 +0000 UTC m=+145.960928312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.723732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" event={"ID":"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c","Type":"ContainerStarted","Data":"e4dc43c7691a85a96470564b2cf4643b57508941e1fea1beb82326689f2c64f8"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.753382 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c4n8t" event={"ID":"9c7cc886-4abd-48f6-8256-67b5011f9cb5","Type":"ContainerStarted","Data":"089654ce3a4dba74e6d78e393b45ceefe7a1a712df0ea5bf27777d1d5fdecc63"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.777176 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" event={"ID":"56a3d8b0-f1b6-4f3b-8ea2-5dd1c1091708","Type":"ContainerStarted","Data":"e70550d19018a92e7feb3896ece8c9afb060b29f6fa28508aaba84d34a73bbac"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.786064 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.786556 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.286540653 +0000 UTC m=+146.059869695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.835839 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" podStartSLOduration=121.83571887 podStartE2EDuration="2m1.83571887s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:59.832354303 +0000 UTC m=+145.605683355" watchObservedRunningTime="2026-01-31 15:00:59.83571887 +0000 UTC m=+145.609047932" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.837616 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" event={"ID":"dc00da9a-bbf1-44ac-b70a-a04198031e2b","Type":"ContainerStarted","Data":"3fe26bd2ed28a7cde4f364872d5ea3e6a320a0ad82bf4ed949b3675fd6d29ae9"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.851358 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qv4fk" event={"ID":"67e6ada3-c08d-4b54-a678-379a25e72a15","Type":"ContainerStarted","Data":"5ce779fc24b2723c22e624e610b8876b668e5acb77fdd5f3a01c0145275b5807"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.851459 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qv4fk" event={"ID":"67e6ada3-c08d-4b54-a678-379a25e72a15","Type":"ContainerStarted","Data":"64a29e3d8d5a0f662a305b10df4e0d1c2463f65c0f14438fc8a63685e9a00ff1"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.852972 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.854375 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" podStartSLOduration=121.854363118 podStartE2EDuration="2m1.854363118s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:59.853381579 +0000 UTC m=+145.626710611" watchObservedRunningTime="2026-01-31 15:00:59.854363118 +0000 UTC m=+145.627692160" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.855757 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-qv4fk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.855830 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qv4fk" podUID="67e6ada3-c08d-4b54-a678-379a25e72a15" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 
15:00:59.874364 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" event={"ID":"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0","Type":"ContainerStarted","Data":"34265d638ef035080969537804f415f1252aafc160f1c7132d7ec815e7157ec1"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.882828 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-875xp" podStartSLOduration=122.882806378 podStartE2EDuration="2m2.882806378s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:59.881251943 +0000 UTC m=+145.654580985" watchObservedRunningTime="2026-01-31 15:00:59.882806378 +0000 UTC m=+145.656135410" Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.887327 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:00:59 crc kubenswrapper[4735]: E0131 15:00:59.888201 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.388146681 +0000 UTC m=+146.161475723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.893396 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" event={"ID":"c0128f15-4d4e-4b4a-ae88-5c10bcfc12b0","Type":"ContainerStarted","Data":"aa76136c60b261620ef6de3c56d94c96e93b3b555e64ffba4b0a9e0db09e3f2a"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.972097 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-64jbf" event={"ID":"805c24cb-0eea-458e-ae49-38eb501feadc","Type":"ContainerStarted","Data":"25fd79fd2a3692cfc20a7a66cf051099dc6af9477252b576664ed0c3194c3bae"} Jan 31 15:00:59 crc kubenswrapper[4735]: I0131 15:00:59.972146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-64jbf" event={"ID":"805c24cb-0eea-458e-ae49-38eb501feadc","Type":"ContainerStarted","Data":"74c7b97bed763a1412cd8d231aac64dadcd52a98f624c4a15c1f1de3275e3b38"} Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.008491 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.010000 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.509982584 +0000 UTC m=+146.283311616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.012487 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.014524 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.514512595 +0000 UTC m=+146.287841637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.023869 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" event={"ID":"c660ea67-0ecc-40af-8dd7-a4a50d350ee3","Type":"ContainerStarted","Data":"955c0f598c539e376b55dff959a7e196224b4c2c30df401fcbd2f5c74ee0dec0"} Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.055875 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkjjj" event={"ID":"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07","Type":"ContainerStarted","Data":"35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b"} Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.085686 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" podStartSLOduration=123.085664736 podStartE2EDuration="2m3.085664736s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.078383027 +0000 UTC m=+145.851712079" watchObservedRunningTime="2026-01-31 15:01:00.085664736 +0000 UTC m=+145.858993778" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.096210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" event={"ID":"e643acf0-b846-46cc-b067-dbfd708f50ee","Type":"ContainerStarted","Data":"2f43759e445ab9214037a5025439daed66e7b6d5c135f75be554683bc6865370"} Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.113723 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.117480 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.617433362 +0000 UTC m=+146.390762414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.125255 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.216379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.219700 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.719683901 +0000 UTC m=+146.493012943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.304606 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-r5ppg"] Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.317633 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.318047 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.818025536 +0000 UTC m=+146.591354578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.403947 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.419707 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.420317 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:00.920299865 +0000 UTC m=+146.693628907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.429308 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mqc92" podStartSLOduration=123.429287254 podStartE2EDuration="2m3.429287254s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.351985065 +0000 UTC m=+146.125314127" watchObservedRunningTime="2026-01-31 15:01:00.429287254 +0000 UTC m=+146.202616296" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.499553 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf"] Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.502935 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7"] Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.523056 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.523634 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.023592413 +0000 UTC m=+146.796921465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.552491 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r62v9" podStartSLOduration=122.552471016 podStartE2EDuration="2m2.552471016s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.543979101 +0000 UTC m=+146.317308153" watchObservedRunningTime="2026-01-31 15:01:00.552471016 +0000 UTC m=+146.325800058" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.556066 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49"] Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.591623 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" podStartSLOduration=122.591587564 podStartE2EDuration="2m2.591587564s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.582668776 +0000 UTC m=+146.355997818" watchObservedRunningTime="2026-01-31 15:01:00.591587564 +0000 UTC m=+146.364916606" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.604912 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.604987 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.626514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.626974 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.126957123 +0000 UTC m=+146.900286165 (durationBeforeRetry 500ms). 
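The probe failures recorded above, the downloads-7954f5f757-qv4fk readiness probe against 10.217.0.27:8080 and the router-default startup probe against localhost:1936/healthz/ready, are plain HTTP GET checks that return "connection refused" while the target container is still starting. A stand-alone Go sketch of such a check (treating any 2xx or 3xx response as success, as the kubelet prober does); illustration only, not the prober itself:

// Minimal HTTP probe reproducing the failure output seen in the entries above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) string {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. dial tcp [::1]:1936: connect: connection refused
		return fmt.Sprintf("failure: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
	// Endpoints copied from the entries above; outside the cluster nothing
	// listens on them, which reproduces the logged "connection refused".
	fmt.Println(probe("http://localhost:1936/healthz/ready"))
	fmt.Println(probe("http://10.217.0.27:8080/"))
}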
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: W0131 15:01:00.667831 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2037333_026d_4944_8810_18e892c44792.slice/crio-f3679292bd604267569750d0a61ebe4674ca962f510d87b7df2b7785751533af WatchSource:0}: Error finding container f3679292bd604267569750d0a61ebe4674ca962f510d87b7df2b7785751533af: Status 404 returned error can't find the container with id f3679292bd604267569750d0a61ebe4674ca962f510d87b7df2b7785751533af Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.731065 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.731436 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.231405865 +0000 UTC m=+147.004734907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.772562 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nfk5g" podStartSLOduration=123.772540311 podStartE2EDuration="2m3.772540311s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.742152685 +0000 UTC m=+146.515481727" watchObservedRunningTime="2026-01-31 15:01:00.772540311 +0000 UTC m=+146.545869353" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.855302 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.855841 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 15:01:01.355820102 +0000 UTC m=+147.129149144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.856975 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kkkpm" podStartSLOduration=122.856938754 podStartE2EDuration="2m2.856938754s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.855959706 +0000 UTC m=+146.629288748" watchObservedRunningTime="2026-01-31 15:01:00.856938754 +0000 UTC m=+146.630267806" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.857836 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v95j9" podStartSLOduration=122.85782594 podStartE2EDuration="2m2.85782594s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.791387054 +0000 UTC m=+146.564716096" watchObservedRunningTime="2026-01-31 15:01:00.85782594 +0000 UTC m=+146.631154982" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.934594 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hkkvl"] Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.935841 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" podStartSLOduration=123.935831269 podStartE2EDuration="2m3.935831269s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.892228072 +0000 UTC m=+146.665557114" watchObservedRunningTime="2026-01-31 15:01:00.935831269 +0000 UTC m=+146.709160311" Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.956078 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.960015 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qv4fk" podStartSLOduration=123.959995196 podStartE2EDuration="2m3.959995196s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:00.952129039 +0000 UTC m=+146.725458081" watchObservedRunningTime="2026-01-31 15:01:00.959995196 +0000 UTC 
m=+146.733324238" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.965956 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.465918757 +0000 UTC m=+147.239247799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:00 crc kubenswrapper[4735]: I0131 15:01:00.966147 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:00 crc kubenswrapper[4735]: E0131 15:01:00.966496 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.466489163 +0000 UTC m=+147.239818205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.054645 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-64jbf" podStartSLOduration=123.054625394 podStartE2EDuration="2m3.054625394s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.000598637 +0000 UTC m=+146.773927689" watchObservedRunningTime="2026-01-31 15:01:01.054625394 +0000 UTC m=+146.827954426" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.071377 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.071788 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.571767209 +0000 UTC m=+147.345096251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.171464 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.174555 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.175194 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.67516884 +0000 UTC m=+147.448497882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.177694 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.246532 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.255942 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.266782 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7b2f" event={"ID":"e643acf0-b846-46cc-b067-dbfd708f50ee","Type":"ContainerStarted","Data":"bd126ba3b63cbeed257cb44c2f75f7f9cfa844abd9e674a674c955aaa5fb704f"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.272935 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djr57"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.279918 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.280318 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.780297681 +0000 UTC m=+147.553626723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.286358 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" event={"ID":"22de0c73-39eb-46e4-aa76-3b0bb86e327b","Type":"ContainerStarted","Data":"9c29fab698a63faf4e4053f37ad65ddd30d21c1b04b4d69b4bc113cade3c009c"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.309934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" event={"ID":"f4f37354-52b0-4164-a33c-65aa16618732","Type":"ContainerStarted","Data":"62ddd89416e7c87d1bfd2707fddef14e919a3413f72a5cfa7b240221a08b0f74"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.309979 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" event={"ID":"f4f37354-52b0-4164-a33c-65aa16618732","Type":"ContainerStarted","Data":"f81e7e90b77a7a9192e9897e3da95cefda7efa8cf7775e732825fb9a8feed6dc"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.356578 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c4n8t" event={"ID":"9c7cc886-4abd-48f6-8256-67b5011f9cb5","Type":"ContainerStarted","Data":"b5ceed6e81cc97354df7c0e9ef98d96e8a2cecf0c1a77d1d73851dfd1a052d8a"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.359682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" event={"ID":"977c9314-7448-48b4-acd7-583385fd7138","Type":"ContainerStarted","Data":"b8c994241e24d7f6f95a38af591ed254c4465a5d254b7b73bbc671884f44fcf3"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.365529 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" podStartSLOduration=124.365517598 podStartE2EDuration="2m4.365517598s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.365281791 +0000 UTC m=+147.138610843" watchObservedRunningTime="2026-01-31 15:01:01.365517598 +0000 UTC m=+147.138846650" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.381599 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.382985 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:01.882957201 +0000 UTC m=+147.656286243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.386775 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" event={"ID":"49baed04-a186-4573-b913-9e00661a18a3","Type":"ContainerStarted","Data":"cceda3bd00365a908d599c0a548aee0e9de4fbfaf91832da47ad693cb48b6d43"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.411355 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:01 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:01 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:01 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.411450 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.427052 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-c4n8t" podStartSLOduration=6.427028532 podStartE2EDuration="6.427028532s" podCreationTimestamp="2026-01-31 15:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.40720853 +0000 UTC m=+147.180537572" watchObservedRunningTime="2026-01-31 15:01:01.427028532 +0000 UTC m=+147.200357574" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.427093 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r5ppg" event={"ID":"12afefa5-e59c-482f-97fc-00499ee3a1c2","Type":"ContainerStarted","Data":"7b8460aef97bd75622460111c9ce1e76189a7280811a9c50b36cb1e187dfa455"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.431500 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.431525 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.432285 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f"] Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.448631 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" event={"ID":"52d5f4fc-bb86-426a-b56e-810e4ffc1315","Type":"ContainerStarted","Data":"88e8b47bc300a7993b3b59ba8681343a9d4c12ed6c913719894330d6322d0b62"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.448696 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" event={"ID":"52d5f4fc-bb86-426a-b56e-810e4ffc1315","Type":"ContainerStarted","Data":"8c779d0877838cadb7ca964397fe7ea162e4e0dcf403ddb7f0e11c0efa6b5397"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.496851 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" event={"ID":"93789125-4788-45cd-bd8d-be348946b798","Type":"ContainerStarted","Data":"3c7d69aec0814f6ff11af4678db5617fe5c491d8ba4eaa77dd33c4245e4be72e"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.496926 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" event={"ID":"93789125-4788-45cd-bd8d-be348946b798","Type":"ContainerStarted","Data":"9c0b554aa4a36583128e9aea9485b23148aba9e760fbda4000ee5a1955cacdee"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.510745 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.512689 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.012664701 +0000 UTC m=+147.785993743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.513792 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.516763 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.016752369 +0000 UTC m=+147.790081411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.549219 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nf8zl" podStartSLOduration=123.549193654 podStartE2EDuration="2m3.549193654s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.499131531 +0000 UTC m=+147.272460583" watchObservedRunningTime="2026-01-31 15:01:01.549193654 +0000 UTC m=+147.322522686" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.580776 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5mk5" podStartSLOduration=123.580751504 podStartE2EDuration="2m3.580751504s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.547026752 +0000 UTC m=+147.320355794" watchObservedRunningTime="2026-01-31 15:01:01.580751504 +0000 UTC m=+147.354080546" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.611088 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" event={"ID":"8935f1ed-7ef3-4719-865c-aab9b67e75da","Type":"ContainerStarted","Data":"3c38ce2c7d42be9504297f3d8680bbfa4105a0d31c0677d67a0cb7f0413aea23"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.611144 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.611162 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" event={"ID":"8935f1ed-7ef3-4719-865c-aab9b67e75da","Type":"ContainerStarted","Data":"f80f309f5a91db5dd4227398002224178284f28fe3d3c31480847454d60d78d5"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.611175 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.611244 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.616081 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.617585 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.117554375 +0000 UTC m=+147.890883417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.619605 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" event={"ID":"f0a6bac8-059b-4a0a-8aa3-589a921d20a9","Type":"ContainerStarted","Data":"b83f5abb23967596638de7b6cba2f7c62872df4067a4a118ff7627436da45fe9"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.620602 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.670175 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-v8xhb" event={"ID":"da2f588d-a855-4df4-b3ab-fba7111c974c","Type":"ContainerStarted","Data":"406a670a7570432cc27fd593d39913e0a003c267880e56ffa2965d3b80421e78"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.670227 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-v8xhb" event={"ID":"da2f588d-a855-4df4-b3ab-fba7111c974c","Type":"ContainerStarted","Data":"f70c345a2297056ceceec329bfeab1672bfb2fa2c9075de66678a68df1fcb021"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.677314 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" podStartSLOduration=124.677290937 podStartE2EDuration="2m4.677290937s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.676251907 +0000 UTC m=+147.449580949" watchObservedRunningTime="2026-01-31 15:01:01.677290937 +0000 UTC m=+147.450619979" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.684776 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" event={"ID":"e2037333-026d-4944-8810-18e892c44792","Type":"ContainerStarted","Data":"f3679292bd604267569750d0a61ebe4674ca962f510d87b7df2b7785751533af"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.709246 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" event={"ID":"5b6743de-3bcf-49a3-b8ec-a1dbaf72293c","Type":"ContainerStarted","Data":"17fb9e28c07774b7bd38c5d2f37bed64f5c93d9aa54059e3d9f87edd657eee9e"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.718875 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-v8xhb" podStartSLOduration=6.718851766 podStartE2EDuration="6.718851766s" podCreationTimestamp="2026-01-31 15:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
15:01:01.71830385 +0000 UTC m=+147.491632892" watchObservedRunningTime="2026-01-31 15:01:01.718851766 +0000 UTC m=+147.492180808" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.719540 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.721068 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.221056959 +0000 UTC m=+147.994386001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.736689 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkjjj" event={"ID":"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07","Type":"ContainerStarted","Data":"0720891a1df7789e582756a0cff91f654f9f3c93704ac86f97123dbc0908be67"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.739976 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" event={"ID":"dc00da9a-bbf1-44ac-b70a-a04198031e2b","Type":"ContainerStarted","Data":"9a40d51d1e5d481b5c629d520ef33dd9d9a92acde0bdc0790e7ab3ee8189db71"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.741590 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.741803 4735 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8fshf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.741838 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" podUID="dc00da9a-bbf1-44ac-b70a-a04198031e2b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.756672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" event={"ID":"0ae31b9d-dde8-465b-9b2e-e81832178125","Type":"ContainerStarted","Data":"23db9c7f7704ae2e4f057a89f0b6471d6f7b84a415e6314953af4f148b984b2d"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.757694 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" 
Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.759278 4735 patch_prober.go:28] interesting pod/console-operator-58897d9998-rlq9z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.759312 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" podUID="0ae31b9d-dde8-465b-9b2e-e81832178125" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.759955 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7jvwd" podStartSLOduration=123.75992273 podStartE2EDuration="2m3.75992273s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.758143299 +0000 UTC m=+147.531472341" watchObservedRunningTime="2026-01-31 15:01:01.75992273 +0000 UTC m=+147.533251772" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.765841 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" event={"ID":"dd3f921e-1695-4e8e-acd1-eff2a5981dac","Type":"ContainerStarted","Data":"4c413aa789a97f1f4bcac7d3858dd61e28670297e2dc54883f2501a46ad32300"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.766872 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.770599 4735 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p8j49 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.770680 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" podUID="dd3f921e-1695-4e8e-acd1-eff2a5981dac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.794537 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" event={"ID":"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0","Type":"ContainerStarted","Data":"0384d1de5b1a691444745dd77d1b663834ae2979e0b6c84125aba4db19f74d71"} Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.795473 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-qv4fk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.795558 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qv4fk" podUID="67e6ada3-c08d-4b54-a678-379a25e72a15" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.820516 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.822526 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.322413162 +0000 UTC m=+148.095742204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.825474 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2bslq" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.825594 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.825981 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.859482 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" podStartSLOduration=124.8594638 podStartE2EDuration="2m4.8594638s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.858722709 +0000 UTC m=+147.632051771" watchObservedRunningTime="2026-01-31 15:01:01.8594638 +0000 UTC m=+147.632792842" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.860203 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kkjjj" podStartSLOduration=124.860194871 podStartE2EDuration="2m4.860194871s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.812757723 +0000 UTC m=+147.586086775" watchObservedRunningTime="2026-01-31 15:01:01.860194871 +0000 UTC m=+147.633523913" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.862375 4735 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xxhdb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.862445 4735 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" podUID="f4f37354-52b0-4164-a33c-65aa16618732" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.926298 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:01 crc kubenswrapper[4735]: E0131 15:01:01.928553 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.428537852 +0000 UTC m=+148.201866894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:01 crc kubenswrapper[4735]: I0131 15:01:01.975166 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" podStartSLOduration=123.975141055 podStartE2EDuration="2m3.975141055s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.927198313 +0000 UTC m=+147.700527365" watchObservedRunningTime="2026-01-31 15:01:01.975141055 +0000 UTC m=+147.748470097" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.033385 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.034236 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.534215879 +0000 UTC m=+148.307544921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.097864 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" podStartSLOduration=124.097842033 podStartE2EDuration="2m4.097842033s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:02.096276558 +0000 UTC m=+147.869605600" watchObservedRunningTime="2026-01-31 15:01:02.097842033 +0000 UTC m=+147.871171075" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.135259 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.144935 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.64491438 +0000 UTC m=+148.418243432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.248223 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.248682 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.748656761 +0000 UTC m=+148.521985803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.248950 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.249385 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.749368072 +0000 UTC m=+148.522697114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.350402 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.353308 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.853257957 +0000 UTC m=+148.626587009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.354764 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.358592 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.858576221 +0000 UTC m=+148.631905263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.414776 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:02 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:02 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:02 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.414827 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.456731 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.456917 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.956879565 +0000 UTC m=+148.730208607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.457439 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.457763 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:02.95775246 +0000 UTC m=+148.731081502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.559210 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.559557 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.059542505 +0000 UTC m=+148.832871547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.660956 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.661333 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.16132109 +0000 UTC m=+148.934650132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.762257 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.762509 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.262487667 +0000 UTC m=+149.035816709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.762650 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.762996 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.262987401 +0000 UTC m=+149.036316443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.800196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" event={"ID":"70d59cce-34b9-480a-82df-5c6303374dd8","Type":"ContainerStarted","Data":"c748551e1c30b705c1b87898b3ac79d4877f89eca39129388d948c69f412f4dd"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.800240 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" event={"ID":"70d59cce-34b9-480a-82df-5c6303374dd8","Type":"ContainerStarted","Data":"6373ec86998d08e8d25d2789735ba7177e23834abfd973927e7e6db002ea700c"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.801228 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.802519 4735 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dwl27 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.802558 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" podUID="70d59cce-34b9-480a-82df-5c6303374dd8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.803244 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" event={"ID":"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40","Type":"ContainerStarted","Data":"f44fe6a05868bc8dc06b70217915f0a6c8b559cfcf4f01bd7f0eb7b0d63b13af"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.803269 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" event={"ID":"a2f320fe-c0ab-4ca4-a6d7-5d26755cdb40","Type":"ContainerStarted","Data":"076820b1086b759985f8546b5def04bdf1958e8441a169b92ae1091d0365c336"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.804932 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" event={"ID":"22de0c73-39eb-46e4-aa76-3b0bb86e327b","Type":"ContainerStarted","Data":"f4bc9601e952d12489d33e2899f4c4757280f45f232cf7e30ed696a6d0ad1642"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.808070 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" event={"ID":"e2037333-026d-4944-8810-18e892c44792","Type":"ContainerStarted","Data":"6fe5ef7cc5adaccdd1f474b5a18c8b36987542467d551061818acbcdb65e699a"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.808103 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" event={"ID":"e2037333-026d-4944-8810-18e892c44792","Type":"ContainerStarted","Data":"16fa6dfac8d2a6ce8273604e56e5e4c1cfb9a1789dd16f86a1f66594348877c3"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.811152 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" event={"ID":"ffc668ac-7281-4425-8430-529b4e476483","Type":"ContainerStarted","Data":"cd24137ade04d71a3993ea9176f211e474fe48ac64f1d6415910c861a3222401"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.811181 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" event={"ID":"ffc668ac-7281-4425-8430-529b4e476483","Type":"ContainerStarted","Data":"fb9e5d285b7926414ec11f200381a82e801e9c4fb8cf955dc470305ee2aba8e2"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.811839 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.813098 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-djr57 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.813156 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.814670 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" 
event={"ID":"ddfbcf8e-6bfe-4b83-a244-ed2c44c6c5b0","Type":"ContainerStarted","Data":"7ce47500c50b7661c16125f2b2cb12831d95adb884402678a3c0f7a19978141a"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.816726 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" event={"ID":"1f8aafc5-01be-4e71-a217-6986de9a8f08","Type":"ContainerStarted","Data":"dc58fea4a24181d99c07ace04378e75e8378a019a4618223cdc9db8238743c94"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.816751 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" event={"ID":"1f8aafc5-01be-4e71-a217-6986de9a8f08","Type":"ContainerStarted","Data":"78aee6ae8e2319f8351917d466a5570a18a936fa86dab76187b7120757f05f70"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.816762 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" event={"ID":"1f8aafc5-01be-4e71-a217-6986de9a8f08","Type":"ContainerStarted","Data":"72d497780424302b302fb702f679ad02fb12d3a11f40f7e454e6fa7f4ff79ea6"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.818828 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" event={"ID":"00174b9a-5905-4611-943b-3652273b31b5","Type":"ContainerStarted","Data":"a19acc04d81e8919d7ff14bd20bba0a5c6da54c4b2ea1327bacc352a672efbf4"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.818855 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" event={"ID":"00174b9a-5905-4611-943b-3652273b31b5","Type":"ContainerStarted","Data":"318b560aa87832f3febb4f59e6b6c8d9d3cef5884b6b82d88fcc959fe466e100"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.818867 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" event={"ID":"00174b9a-5905-4611-943b-3652273b31b5","Type":"ContainerStarted","Data":"7a49f2f6571adf0c9b40b175396b1e315e3d793e8b281320b7294c47bd75e54e"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.820325 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" event={"ID":"977c9314-7448-48b4-acd7-583385fd7138","Type":"ContainerStarted","Data":"e20a995ac68e9e64fd7014d84ef851d13f79d64402df53ae7459a3b3b4fef09f"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.823850 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" event={"ID":"ff4c4eb9-3ad4-47b0-a47a-974b41396828","Type":"ContainerStarted","Data":"59010375bf10df82aee38f334d9997abd14074b24aac5b271aabe69dcac2bfe5"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.824145 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" event={"ID":"ff4c4eb9-3ad4-47b0-a47a-974b41396828","Type":"ContainerStarted","Data":"d55e94d501b91a9ad7f988fe9c848bed3b7965264e38ddf1bd8bddc30eb92c9f"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.826289 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" 
event={"ID":"5d06f51c-fb29-4011-9168-ff8321e05dd9","Type":"ContainerStarted","Data":"b3e19bc82b08835fd0287628ea15291625c7e9ede44c4e3b7bcc07b8abe35e0b"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.826347 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" event={"ID":"5d06f51c-fb29-4011-9168-ff8321e05dd9","Type":"ContainerStarted","Data":"89541cdcc91a476233886f99af943bc1f0e28b9653f2983b96f51f35eb0e407a"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.826371 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" event={"ID":"5d06f51c-fb29-4011-9168-ff8321e05dd9","Type":"ContainerStarted","Data":"45538f6bd9cc19263a1af8057a26ae58f2a06ca4bc368e49d87df47594d45e7f"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.826618 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.827635 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" event={"ID":"7640c224-4041-45b4-9755-e8b091d0f7c9","Type":"ContainerStarted","Data":"e0eff0acb2fdcefcc9f4b1da9a7ea19ab4f6033deb9bd2611d0e9296fd1f5108"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.827657 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" event={"ID":"7640c224-4041-45b4-9755-e8b091d0f7c9","Type":"ContainerStarted","Data":"ebf5fac0c8ecd301d9757b6f733795ab861d320c79ec8bd5c011fd086a72384c"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.827668 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" event={"ID":"7640c224-4041-45b4-9755-e8b091d0f7c9","Type":"ContainerStarted","Data":"ed7c8ab66ca5d228b552e037b3829c43765fa9e7d284882b3fc0d9a797756b11"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.830831 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" event={"ID":"8935f1ed-7ef3-4719-865c-aab9b67e75da","Type":"ContainerStarted","Data":"f14527c175490b6f5790bbf99742e2b1aef93823586e234aa3f7f6d0ddc44790"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.833684 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" event={"ID":"49baed04-a186-4573-b913-9e00661a18a3","Type":"ContainerStarted","Data":"d94281fac4309a5ca97d472ffae1471ea74c35fb8442828c0e0695b42e286dd5"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.835176 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" event={"ID":"dd3f921e-1695-4e8e-acd1-eff2a5981dac","Type":"ContainerStarted","Data":"1dfd0e2560202df9267db8fd3ba8ee49808e4f9bcd92a9849b84148987fffbc9"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.835928 4735 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p8j49 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 
15:01:02.835965 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" podUID="dd3f921e-1695-4e8e-acd1-eff2a5981dac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.841993 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r5ppg" event={"ID":"12afefa5-e59c-482f-97fc-00499ee3a1c2","Type":"ContainerStarted","Data":"d2dcfe68a59cbcb2e739c4dc0cbc7e499be7382be63219b1d658a6538d400e80"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.842039 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-r5ppg" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.842051 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-r5ppg" event={"ID":"12afefa5-e59c-482f-97fc-00499ee3a1c2","Type":"ContainerStarted","Data":"53d8b1c0310980d380873c25cdce6f6fa6904127883cfe89279055495fcf1f98"} Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.844643 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-qv4fk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.844647 4735 patch_prober.go:28] interesting pod/console-operator-58897d9998-rlq9z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.844715 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qv4fk" podUID="67e6ada3-c08d-4b54-a678-379a25e72a15" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.844804 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" podUID="0ae31b9d-dde8-465b-9b2e-e81832178125" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.846554 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" podStartSLOduration=124.84653963 podStartE2EDuration="2m4.84653963s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:02.844895323 +0000 UTC m=+148.618224375" watchObservedRunningTime="2026-01-31 15:01:02.84653963 +0000 UTC m=+148.619868672" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.864029 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.866103 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.366086314 +0000 UTC m=+149.139415356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.874766 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qsdp7" podStartSLOduration=124.874749903 podStartE2EDuration="2m4.874749903s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:02.874525737 +0000 UTC m=+148.647854789" watchObservedRunningTime="2026-01-31 15:01:02.874749903 +0000 UTC m=+148.648078945" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.895475 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8fshf" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.957522 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wv8jn" podStartSLOduration=124.9575053 podStartE2EDuration="2m4.9575053s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:02.955145271 +0000 UTC m=+148.728474333" watchObservedRunningTime="2026-01-31 15:01:02.9575053 +0000 UTC m=+148.730834342" Jan 31 15:01:02 crc kubenswrapper[4735]: I0131 15:01:02.966468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:02 crc kubenswrapper[4735]: E0131 15:01:02.991853 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.491839059 +0000 UTC m=+149.265168101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.034300 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4lb6b" podStartSLOduration=125.034278763 podStartE2EDuration="2m5.034278763s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.033308385 +0000 UTC m=+148.806637427" watchObservedRunningTime="2026-01-31 15:01:03.034278763 +0000 UTC m=+148.807607805" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.036160 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hkkvl" podStartSLOduration=125.036153607 podStartE2EDuration="2m5.036153607s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.020391833 +0000 UTC m=+148.793720875" watchObservedRunningTime="2026-01-31 15:01:03.036153607 +0000 UTC m=+148.809482649" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.061253 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-s5lwz" podStartSLOduration=125.06122927 podStartE2EDuration="2m5.06122927s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.058962395 +0000 UTC m=+148.832291437" watchObservedRunningTime="2026-01-31 15:01:03.06122927 +0000 UTC m=+148.834558312" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.075090 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.076054 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.576013216 +0000 UTC m=+149.349342258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.078918 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzg7z" podStartSLOduration=125.07889795 podStartE2EDuration="2m5.07889795s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.077157509 +0000 UTC m=+148.850486551" watchObservedRunningTime="2026-01-31 15:01:03.07889795 +0000 UTC m=+148.852226992" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.121236 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" podStartSLOduration=125.121203589 podStartE2EDuration="2m5.121203589s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.119413508 +0000 UTC m=+148.892742570" watchObservedRunningTime="2026-01-31 15:01:03.121203589 +0000 UTC m=+148.894532631" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.181941 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" podStartSLOduration=63.181903099 podStartE2EDuration="1m3.181903099s" podCreationTimestamp="2026-01-31 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.179546242 +0000 UTC m=+148.952875304" watchObservedRunningTime="2026-01-31 15:01:03.181903099 +0000 UTC m=+148.955232141" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.183018 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-r5ppg" podStartSLOduration=8.183012451 podStartE2EDuration="8.183012451s" podCreationTimestamp="2026-01-31 15:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.155848838 +0000 UTC m=+148.929177890" watchObservedRunningTime="2026-01-31 15:01:03.183012451 +0000 UTC m=+148.956341493" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.183461 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.183918 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 15:01:03.683899577 +0000 UTC m=+149.457228619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.224028 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fhmhw" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.232081 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" podStartSLOduration=125.232061146 podStartE2EDuration="2m5.232061146s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.210733661 +0000 UTC m=+148.984062703" watchObservedRunningTime="2026-01-31 15:01:03.232061146 +0000 UTC m=+149.005390188" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.232543 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4sz7n" podStartSLOduration=125.232537969 podStartE2EDuration="2m5.232537969s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.231215291 +0000 UTC m=+149.004544343" watchObservedRunningTime="2026-01-31 15:01:03.232537969 +0000 UTC m=+149.005867001" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.259800 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5nc6f" podStartSLOduration=125.259781845 podStartE2EDuration="2m5.259781845s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.25890456 +0000 UTC m=+149.032233612" watchObservedRunningTime="2026-01-31 15:01:03.259781845 +0000 UTC m=+149.033110887" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.282817 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-brzfv" podStartSLOduration=125.282791148 podStartE2EDuration="2m5.282791148s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:03.281026338 +0000 UTC m=+149.054355400" watchObservedRunningTime="2026-01-31 15:01:03.282791148 +0000 UTC m=+149.056120190" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.291273 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.291742 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.791719856 +0000 UTC m=+149.565048898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.393857 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.394200 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.894173389 +0000 UTC m=+149.667502431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.405156 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:03 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:03 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:03 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.405249 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.495131 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.495349 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.995314835 +0000 UTC m=+149.768643877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.495618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.495972 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:03.995963874 +0000 UTC m=+149.769292916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.596691 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.596864 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.096836952 +0000 UTC m=+149.870165994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.597301 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.597716 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.097702957 +0000 UTC m=+149.871031999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.698469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.698848 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.198827693 +0000 UTC m=+149.972156725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.800037 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.800491 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.300472704 +0000 UTC m=+150.073801746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.848876 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" event={"ID":"49baed04-a186-4573-b913-9e00661a18a3","Type":"ContainerStarted","Data":"529698dce57591e440b4f76e49cf58c110dfa1a2896b7b4ed924bd453139d137"} Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.849993 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-djr57 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.850043 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.864948 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rlq9z" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.874911 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dwl27" Jan 31 15:01:03 crc kubenswrapper[4735]: I0131 15:01:03.900830 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:03 crc kubenswrapper[4735]: E0131 15:01:03.901442 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.401400874 +0000 UTC m=+150.174729916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.003529 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.024172 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.509635144 +0000 UTC m=+150.282964186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.110361 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.110849 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.610833752 +0000 UTC m=+150.384162784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.160166 4735 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.186605 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.187292 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.189020 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.190868 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.205885 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.246513 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.246939 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.746922376 +0000 UTC m=+150.520251418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.348125 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.348243 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.848223487 +0000 UTC m=+150.621552529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.348568 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.348633 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.349378 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.84936614 +0000 UTC m=+150.622695182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.349597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.349765 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.350114 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.405242 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:04 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:04 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:04 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.405332 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451165 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.451408 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.951375391 +0000 UTC m=+150.724704433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451502 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451587 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451693 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451722 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.451758 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.452134 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:04.952106552 +0000 UTC m=+150.725435744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.452866 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.462291 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.462311 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.463359 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.486548 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.497378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.504207 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.561986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.562405 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:05.062391182 +0000 UTC m=+150.835720224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.585434 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.598176 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.667288 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.667613 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:05.167601315 +0000 UTC m=+150.940930357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.739484 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8j49" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.774078 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.774551 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:05.274536508 +0000 UTC m=+151.047865540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.876084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.876385 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 15:01:05.376375005 +0000 UTC m=+151.149704047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t7kmx" (UID: "e924aff1-607d-40b9-91a4-14813ff15844") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.897903 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" event={"ID":"49baed04-a186-4573-b913-9e00661a18a3","Type":"ContainerStarted","Data":"edd30d8135c580ea8c7cef3ed24d774fd7d5c9f240bd0fdb3a9e774304183caf"} Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.897939 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" event={"ID":"49baed04-a186-4573-b913-9e00661a18a3","Type":"ContainerStarted","Data":"e221a470f1adc4ff8d10891f4344582fab0dce9d83c51e4b45dd9023d3d22c60"} Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.899389 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-djr57 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.899512 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.972963 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8r8s6" podStartSLOduration=9.972935559 podStartE2EDuration="9.972935559s" podCreationTimestamp="2026-01-31 15:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:04.97088476 +0000 UTC m=+150.744213812" watchObservedRunningTime="2026-01-31 15:01:04.972935559 +0000 UTC m=+150.746264601" Jan 31 15:01:04 crc kubenswrapper[4735]: I0131 15:01:04.980046 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:04 crc kubenswrapper[4735]: E0131 15:01:04.981970 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 15:01:05.481943539 +0000 UTC m=+151.255272581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.026186 4735 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T15:01:04.160205776Z","Handler":null,"Name":""} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.031723 4735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.031788 4735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.044975 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.081829 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.091272 4735 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.091323 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.098344 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgtqr"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.105759 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.108798 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.124818 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgtqr"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.163924 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t7kmx\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.182439 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.182881 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.182919 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.182947 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfqq\" (UniqueName: \"kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.211802 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4735]: W0131 15:01:05.274900 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-092e70bdad46684a4ade05a062f545c4d91cb361c790bc4ebe21f12ac3085cf6 WatchSource:0}: Error finding container 092e70bdad46684a4ade05a062f545c4d91cb361c790bc4ebe21f12ac3085cf6: Status 404 returned error can't find the container with id 092e70bdad46684a4ade05a062f545c4d91cb361c790bc4ebe21f12ac3085cf6 Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.278854 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vk2gn"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.280198 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.283722 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.283763 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.283789 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sfqq\" (UniqueName: \"kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.284985 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.285247 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.288544 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.293059 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk2gn"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.303343 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.308683 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sfqq\" (UniqueName: \"kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq\") pod \"community-operators-bgtqr\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: W0131 15:01:05.351861 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-3cacda00f4d4c234d86cb3cef8bf1f86c41f857583409a61f057713e4902187d WatchSource:0}: Error finding container 3cacda00f4d4c234d86cb3cef8bf1f86c41f857583409a61f057713e4902187d: Status 404 returned error can't find the container with id 3cacda00f4d4c234d86cb3cef8bf1f86c41f857583409a61f057713e4902187d Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.386074 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.386568 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq8c5\" (UniqueName: \"kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.386603 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.408272 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:05 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:05 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:05 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.408355 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.483021 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.492489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq8c5\" (UniqueName: \"kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.492560 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.492644 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.494856 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.495555 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.502010 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.503490 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.518020 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.529546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq8c5\" (UniqueName: \"kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5\") pod \"certified-operators-vk2gn\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.573678 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.606628 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.701319 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.702726 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.707384 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.707462 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.707495 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5q4s\" (UniqueName: \"kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.709090 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.730960 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808549 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808607 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sm7s\" (UniqueName: \"kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808641 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5q4s\" (UniqueName: \"kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808706 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities\") pod 
\"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808743 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.808888 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.809266 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.810155 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.839069 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5q4s\" (UniqueName: \"kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s\") pod \"community-operators-qgkxg\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.892570 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk2gn"] Jan 31 15:01:05 crc kubenswrapper[4735]: W0131 15:01:05.905282 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec5adbe8_b11c_4f8d_8689_b1034c9436ba.slice/crio-1ba8d05f1b00e1ea323c1694ab3f510d50491a840c3bd4bc090ca0ea0e80b439 WatchSource:0}: Error finding container 1ba8d05f1b00e1ea323c1694ab3f510d50491a840c3bd4bc090ca0ea0e80b439: Status 404 returned error can't find the container with id 1ba8d05f1b00e1ea323c1694ab3f510d50491a840c3bd4bc090ca0ea0e80b439 Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.905639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" event={"ID":"e924aff1-607d-40b9-91a4-14813ff15844","Type":"ContainerStarted","Data":"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.905716 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.905729 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" 
event={"ID":"e924aff1-607d-40b9-91a4-14813ff15844","Type":"ContainerStarted","Data":"50e9e1ba8a5b64a56911f229fabe007944ddef40d140c57a36a4c6b84b5f1cd0"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.907889 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"34bf03dde0c2fe4cc30d0528e94a0269958549c84d3b419787fdfa1b5254b651"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.907950 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3cacda00f4d4c234d86cb3cef8bf1f86c41f857583409a61f057713e4902187d"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.908333 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.910052 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sm7s\" (UniqueName: \"kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.910153 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.910195 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.910652 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.910971 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.916487 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0f63866a8896f63e75f5b31c3fbd048fa2393955fc15505613b6c8c5e9f0874b"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.916518 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"939beebbd7df2718c1fcf8b1d62b3439afd21a61a326b7697fb89d18de1b95b8"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.924199 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"91acaf3f522b096cbcffecd60eb96128fe3bfdfb243d02db7d7cf828a8a6ea1d"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.924241 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"092e70bdad46684a4ade05a062f545c4d91cb361c790bc4ebe21f12ac3085cf6"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.929118 4735 generic.go:334] "Generic (PLEG): container finished" podID="22de0c73-39eb-46e4-aa76-3b0bb86e327b" containerID="f4bc9601e952d12489d33e2899f4c4757280f45f232cf7e30ed696a6d0ad1642" exitCode=0 Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.929180 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" event={"ID":"22de0c73-39eb-46e4-aa76-3b0bb86e327b","Type":"ContainerDied","Data":"f4bc9601e952d12489d33e2899f4c4757280f45f232cf7e30ed696a6d0ad1642"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.932467 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2","Type":"ContainerStarted","Data":"f5446fce4bb1e5c558afe0fd417d7621ffa7398e04ed9bc7f219477072b8d365"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.932522 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2","Type":"ContainerStarted","Data":"5ba934003d70423526d9be4cba2e5cbc8b3c6e8cfeb86f66c355fceab201147b"} Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.935101 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" podStartSLOduration=127.93508631 podStartE2EDuration="2m7.93508631s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:05.932206607 +0000 UTC m=+151.705535659" watchObservedRunningTime="2026-01-31 15:01:05.93508631 +0000 UTC m=+151.708415352" Jan 31 15:01:05 crc kubenswrapper[4735]: I0131 15:01:05.941061 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sm7s\" (UniqueName: \"kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s\") pod \"certified-operators-xmx2w\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.043368 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.076453 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgtqr"] Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.103368 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.103349212 podStartE2EDuration="2.103349212s" podCreationTimestamp="2026-01-31 15:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:06.101497098 +0000 UTC m=+151.874826150" watchObservedRunningTime="2026-01-31 15:01:06.103349212 +0000 UTC m=+151.876678254" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.150884 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.403574 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:06 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:06 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:06 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.403615 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.437819 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.474400 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:01:06 crc kubenswrapper[4735]: W0131 15:01:06.495380 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f255f7_94e5_4129_adfb_10b19b294eef.slice/crio-2448c04358b84127f3d2b51d663daf0e444e3c86f1d0604cc0ba3f6c4f62c5b5 WatchSource:0}: Error finding container 2448c04358b84127f3d2b51d663daf0e444e3c86f1d0604cc0ba3f6c4f62c5b5: Status 404 returned error can't find the container with id 2448c04358b84127f3d2b51d663daf0e444e3c86f1d0604cc0ba3f6c4f62c5b5 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.837241 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.844541 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xxhdb" Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.946316 4735 generic.go:334] "Generic (PLEG): container finished" podID="bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" containerID="f5446fce4bb1e5c558afe0fd417d7621ffa7398e04ed9bc7f219477072b8d365" exitCode=0 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.946510 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2","Type":"ContainerDied","Data":"f5446fce4bb1e5c558afe0fd417d7621ffa7398e04ed9bc7f219477072b8d365"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.960432 4735 generic.go:334] "Generic (PLEG): container finished" podID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerID="1709dcfd2c2c942af6eeeb1acede9452dead35fa6ec51c0e47ff1fe3c7466b94" exitCode=0 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.960503 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerDied","Data":"1709dcfd2c2c942af6eeeb1acede9452dead35fa6ec51c0e47ff1fe3c7466b94"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.960835 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerStarted","Data":"1fd3d70b368097e681a90261a5427112033a3a44f470666c0ad131e0818ac5b3"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.963031 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.970085 4735 generic.go:334] "Generic (PLEG): container finished" podID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerID="2b4bbd6c0537d326651139cfe27a4259013bb6cf9b96aa51727765443037c373" exitCode=0 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.970285 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerDied","Data":"2b4bbd6c0537d326651139cfe27a4259013bb6cf9b96aa51727765443037c373"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.970509 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerStarted","Data":"1ba8d05f1b00e1ea323c1694ab3f510d50491a840c3bd4bc090ca0ea0e80b439"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.981569 4735 generic.go:334] "Generic (PLEG): container finished" podID="83f255f7-94e5-4129-adfb-10b19b294eef" containerID="0c650b60ecf0d46a6ee87ddc81b9da00f62eb4dc95b0528b03d5a476d4116464" exitCode=0 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.981664 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerDied","Data":"0c650b60ecf0d46a6ee87ddc81b9da00f62eb4dc95b0528b03d5a476d4116464"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.981699 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerStarted","Data":"2448c04358b84127f3d2b51d663daf0e444e3c86f1d0604cc0ba3f6c4f62c5b5"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.985485 4735 generic.go:334] "Generic (PLEG): container finished" podID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerID="079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5" exitCode=0 Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.986542 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" 
event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerDied","Data":"079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5"} Jan 31 15:01:06 crc kubenswrapper[4735]: I0131 15:01:06.986747 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerStarted","Data":"cb38f9ee5bc53e1d715e32b2a5676f7b033f358b2154e4c6b3d1065adae30195"} Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.068773 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j55nq"] Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.077186 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.106677 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j55nq"] Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.107895 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.233504 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gng\" (UniqueName: \"kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.233581 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.233641 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.268204 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334433 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume\") pod \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334492 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsx8\" (UniqueName: \"kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8\") pod \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334589 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume\") pod \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\" (UID: \"22de0c73-39eb-46e4-aa76-3b0bb86e327b\") " Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334902 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gng\" (UniqueName: \"kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334940 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.334972 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.335640 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.336460 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume" (OuterVolumeSpecName: "config-volume") pod "22de0c73-39eb-46e4-aa76-3b0bb86e327b" (UID: "22de0c73-39eb-46e4-aa76-3b0bb86e327b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.337829 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.343858 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22de0c73-39eb-46e4-aa76-3b0bb86e327b" (UID: "22de0c73-39eb-46e4-aa76-3b0bb86e327b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.350085 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8" (OuterVolumeSpecName: "kube-api-access-5wsx8") pod "22de0c73-39eb-46e4-aa76-3b0bb86e327b" (UID: "22de0c73-39eb-46e4-aa76-3b0bb86e327b"). InnerVolumeSpecName "kube-api-access-5wsx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.352551 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.352619 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.363720 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gng\" (UniqueName: \"kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng\") pod \"redhat-marketplace-j55nq\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.403092 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:07 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:07 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:07 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.403176 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.403452 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.436784 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22de0c73-39eb-46e4-aa76-3b0bb86e327b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.436821 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22de0c73-39eb-46e4-aa76-3b0bb86e327b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.436835 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wsx8\" (UniqueName: \"kubernetes.io/projected/22de0c73-39eb-46e4-aa76-3b0bb86e327b-kube-api-access-5wsx8\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.470371 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:01:07 crc kubenswrapper[4735]: E0131 15:01:07.470691 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22de0c73-39eb-46e4-aa76-3b0bb86e327b" containerName="collect-profiles" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.470714 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="22de0c73-39eb-46e4-aa76-3b0bb86e327b" containerName="collect-profiles" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.470832 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="22de0c73-39eb-46e4-aa76-3b0bb86e327b" containerName="collect-profiles" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.471903 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.482055 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.537874 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.538025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.538460 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v8d\" (UniqueName: \"kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.636511 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j55nq"] Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.639967 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.640046 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.640148 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7v8d\" (UniqueName: \"kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.641359 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.641717 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: W0131 15:01:07.644447 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ce2f8d_6ee0_4909_833c_72b84f64df15.slice/crio-991ecd63838a89b7766f81eb25e35c256827ef961ea91a96ff580ec3b04219d1 WatchSource:0}: Error finding container 991ecd63838a89b7766f81eb25e35c256827ef961ea91a96ff580ec3b04219d1: Status 404 returned error can't find the container with id 991ecd63838a89b7766f81eb25e35c256827ef961ea91a96ff580ec3b04219d1 Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.658278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7v8d\" (UniqueName: \"kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d\") pod \"redhat-marketplace-jltfp\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:07 crc kubenswrapper[4735]: I0131 15:01:07.801676 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.016823 4735 generic.go:334] "Generic (PLEG): container finished" podID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerID="0e6117b46d6144d1b17715dd06009511960a88475f0d537679e3d63b77167985" exitCode=0 Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.017102 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerDied","Data":"0e6117b46d6144d1b17715dd06009511960a88475f0d537679e3d63b77167985"} Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.017522 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerStarted","Data":"991ecd63838a89b7766f81eb25e35c256827ef961ea91a96ff580ec3b04219d1"} Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.029104 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.029173 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf" event={"ID":"22de0c73-39eb-46e4-aa76-3b0bb86e327b","Type":"ContainerDied","Data":"9c29fab698a63faf4e4053f37ad65ddd30d21c1b04b4d69b4bc113cade3c009c"} Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.029227 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c29fab698a63faf4e4053f37ad65ddd30d21c1b04b4d69b4bc113cade3c009c" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.156188 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-qv4fk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.156257 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qv4fk" podUID="67e6ada3-c08d-4b54-a678-379a25e72a15" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.156197 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-qv4fk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.156470 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qv4fk" podUID="67e6ada3-c08d-4b54-a678-379a25e72a15" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.260094 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.270293 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.271455 
4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: W0131 15:01:08.285684 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8637b2c6_4051_4b0e_967b_c6f4897c9f4a.slice/crio-d73d1ad59d8c663760f79672788d5f89f5c48d737ab4c8c5267313f9d997749a WatchSource:0}: Error finding container d73d1ad59d8c663760f79672788d5f89f5c48d737ab4c8c5267313f9d997749a: Status 404 returned error can't find the container with id d73d1ad59d8c663760f79672788d5f89f5c48d737ab4c8c5267313f9d997749a Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.274582 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.300974 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.358590 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.358650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g76w\" (UniqueName: \"kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.358698 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.400396 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.404463 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:08 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:08 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:08 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.404578 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.408190 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.436130 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.436411 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.438767 4735 patch_prober.go:28] interesting pod/console-f9d7485db-kkjjj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.438812 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kkjjj" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.460742 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.460824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g76w\" (UniqueName: \"kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.460863 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.461113 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.461599 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.491567 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g76w\" (UniqueName: \"kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w\") pod \"redhat-operators-tkdfl\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.561904 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir\") pod \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.561957 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access\") pod \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\" (UID: \"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2\") " Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.563550 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" (UID: "bbca3a93-f935-45cb-9bcf-d87ca4ef88d2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.567349 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" (UID: "bbca3a93-f935-45cb-9bcf-d87ca4ef88d2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.617774 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.666246 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.666289 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbca3a93-f935-45cb-9bcf-d87ca4ef88d2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.673933 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:01:08 crc kubenswrapper[4735]: E0131 15:01:08.674395 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" containerName="pruner" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.674462 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" containerName="pruner" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.674591 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbca3a93-f935-45cb-9bcf-d87ca4ef88d2" containerName="pruner" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.675629 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.679873 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.767092 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.767152 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9bj\" (UniqueName: \"kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.767193 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.805755 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.868928 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.868992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9bj\" (UniqueName: \"kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.869336 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.869963 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.870041 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " 
pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:08 crc kubenswrapper[4735]: I0131 15:01:08.895243 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9bj\" (UniqueName: \"kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj\") pod \"redhat-operators-gqg4l\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.044635 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.049159 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.108209 4735 generic.go:334] "Generic (PLEG): container finished" podID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerID="202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056" exitCode=0 Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.108333 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerDied","Data":"202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056"} Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.108406 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerStarted","Data":"d73d1ad59d8c663760f79672788d5f89f5c48d737ab4c8c5267313f9d997749a"} Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.114332 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.114726 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbca3a93-f935-45cb-9bcf-d87ca4ef88d2","Type":"ContainerDied","Data":"5ba934003d70423526d9be4cba2e5cbc8b3c6e8cfeb86f66c355fceab201147b"} Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.114757 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba934003d70423526d9be4cba2e5cbc8b3c6e8cfeb86f66c355fceab201147b" Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.406843 4735 patch_prober.go:28] interesting pod/router-default-5444994796-64jbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 15:01:09 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 31 15:01:09 crc kubenswrapper[4735]: [+]process-running ok Jan 31 15:01:09 crc kubenswrapper[4735]: healthz check failed Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.407155 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-64jbf" podUID="805c24cb-0eea-458e-ae49-38eb501feadc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:01:09 crc kubenswrapper[4735]: I0131 15:01:09.698649 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.136180 4735 generic.go:334] "Generic (PLEG): container finished" podID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerID="9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501" exitCode=0 Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.136526 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerDied","Data":"9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501"} Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.136557 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerStarted","Data":"47d771ca3c6b922cbc1c1f09aecdc6d09e8ad820d364f01f8d969d75d0e9ed05"} Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.141315 4735 generic.go:334] "Generic (PLEG): container finished" podID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerID="2f01e81f4bb9ce58992e35a0fb6ef1e9fd1e4bf2012eb78d693e68d5be7153e0" exitCode=0 Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.141370 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerDied","Data":"2f01e81f4bb9ce58992e35a0fb6ef1e9fd1e4bf2012eb78d693e68d5be7153e0"} Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.141398 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerStarted","Data":"71948e41600f84da1d61138b108746f2c8663f2367be6075b4b532cd134faead"} Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.405962 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:01:10 crc kubenswrapper[4735]: I0131 15:01:10.409869 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-64jbf" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.262479 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.263295 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.270598 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.270829 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.282758 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.356502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.356616 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.463596 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.463725 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.465745 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc kubenswrapper[4735]: I0131 15:01:12.505033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:12 crc 
kubenswrapper[4735]: I0131 15:01:12.604170 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:13 crc kubenswrapper[4735]: I0131 15:01:13.198404 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 15:01:13 crc kubenswrapper[4735]: I0131 15:01:13.588054 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-r5ppg" Jan 31 15:01:14 crc kubenswrapper[4735]: I0131 15:01:14.211994 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d8c51b4-614a-4aea-9900-0b73b5abc522","Type":"ContainerStarted","Data":"19e5dd61eae6ede9b8a76bac86d767e8924765bf80ab40a429fdfc64cc2bbf0c"} Jan 31 15:01:14 crc kubenswrapper[4735]: I0131 15:01:14.212358 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d8c51b4-614a-4aea-9900-0b73b5abc522","Type":"ContainerStarted","Data":"431484753a2a2b9a39c83f5d3813ecd9a176c9064876f2af13aa26226e0452f7"} Jan 31 15:01:14 crc kubenswrapper[4735]: I0131 15:01:14.242545 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.242522183 podStartE2EDuration="2.242522183s" podCreationTimestamp="2026-01-31 15:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:14.229472887 +0000 UTC m=+160.002801929" watchObservedRunningTime="2026-01-31 15:01:14.242522183 +0000 UTC m=+160.015851225" Jan 31 15:01:15 crc kubenswrapper[4735]: I0131 15:01:15.254236 4735 generic.go:334] "Generic (PLEG): container finished" podID="8d8c51b4-614a-4aea-9900-0b73b5abc522" containerID="19e5dd61eae6ede9b8a76bac86d767e8924765bf80ab40a429fdfc64cc2bbf0c" exitCode=0 Jan 31 15:01:15 crc kubenswrapper[4735]: I0131 15:01:15.254312 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d8c51b4-614a-4aea-9900-0b73b5abc522","Type":"ContainerDied","Data":"19e5dd61eae6ede9b8a76bac86d767e8924765bf80ab40a429fdfc64cc2bbf0c"} Jan 31 15:01:18 crc kubenswrapper[4735]: I0131 15:01:18.160873 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qv4fk" Jan 31 15:01:18 crc kubenswrapper[4735]: I0131 15:01:18.440981 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:01:18 crc kubenswrapper[4735]: I0131 15:01:18.445119 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:01:20 crc kubenswrapper[4735]: I0131 15:01:20.039403 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: \"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:01:20 crc kubenswrapper[4735]: I0131 15:01:20.051446 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89cfa6-d46d-4cda-a91e-a1d06a743204-metrics-certs\") pod \"network-metrics-daemon-rqxxz\" (UID: 
\"ea89cfa6-d46d-4cda-a91e-a1d06a743204\") " pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:01:20 crc kubenswrapper[4735]: I0131 15:01:20.166130 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqxxz" Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.496502 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.672781 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir\") pod \"8d8c51b4-614a-4aea-9900-0b73b5abc522\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.672830 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access\") pod \"8d8c51b4-614a-4aea-9900-0b73b5abc522\" (UID: \"8d8c51b4-614a-4aea-9900-0b73b5abc522\") " Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.674295 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d8c51b4-614a-4aea-9900-0b73b5abc522" (UID: "8d8c51b4-614a-4aea-9900-0b73b5abc522"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.692187 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d8c51b4-614a-4aea-9900-0b73b5abc522" (UID: "8d8c51b4-614a-4aea-9900-0b73b5abc522"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.774802 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d8c51b4-614a-4aea-9900-0b73b5abc522-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:21 crc kubenswrapper[4735]: I0131 15:01:21.774853 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d8c51b4-614a-4aea-9900-0b73b5abc522-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:22 crc kubenswrapper[4735]: I0131 15:01:22.351416 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8d8c51b4-614a-4aea-9900-0b73b5abc522","Type":"ContainerDied","Data":"431484753a2a2b9a39c83f5d3813ecd9a176c9064876f2af13aa26226e0452f7"} Jan 31 15:01:22 crc kubenswrapper[4735]: I0131 15:01:22.351493 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431484753a2a2b9a39c83f5d3813ecd9a176c9064876f2af13aa26226e0452f7" Jan 31 15:01:22 crc kubenswrapper[4735]: I0131 15:01:22.351549 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 15:01:24 crc kubenswrapper[4735]: I0131 15:01:24.225965 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqxxz"] Jan 31 15:01:25 crc kubenswrapper[4735]: I0131 15:01:25.310533 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:01:32 crc kubenswrapper[4735]: I0131 15:01:32.439364 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" event={"ID":"ea89cfa6-d46d-4cda-a91e-a1d06a743204","Type":"ContainerStarted","Data":"d73e08aa0831204d95faa7f83b63c2b7e86b5aeb83da40d639ec256242752995"} Jan 31 15:01:37 crc kubenswrapper[4735]: I0131 15:01:37.345725 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:37 crc kubenswrapper[4735]: I0131 15:01:37.347568 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:38 crc kubenswrapper[4735]: I0131 15:01:38.839754 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnxc7" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.572923 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.573371 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g76w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tkdfl_openshift-marketplace(71e12294-d2e7-417f-a2fc-376e10c34b08): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.574584 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tkdfl" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.576738 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.576840 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj9bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gqg4l_openshift-marketplace(31b1acb2-6a34-434b-9f48-1934d3eda307): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.578133 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gqg4l" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.668673 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.668832 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7v8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jltfp_openshift-marketplace(8637b2c6-4051-4b0e-967b-c6f4897c9f4a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:42 crc kubenswrapper[4735]: E0131 15:01:42.670013 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jltfp" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.049269 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gqg4l" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.049284 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jltfp" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.049721 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tkdfl" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.127589 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.127745 4735 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5q4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qgkxg_openshift-marketplace(83f255f7-94e5-4129-adfb-10b19b294eef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:44 crc kubenswrapper[4735]: E0131 15:01:44.129446 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qgkxg" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" Jan 31 15:01:44 crc kubenswrapper[4735]: I0131 15:01:44.603250 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.508125 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qgkxg" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.622767 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.622905 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sfqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bgtqr_openshift-marketplace(013c1a0b-77d5-46f3-b90a-a6df449db6a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.623255 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.624198 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bgtqr" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.624332 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sm7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xmx2w_openshift-marketplace(da2ebe13-1049-4de9-9b68-19ebff67ff15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.626105 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xmx2w" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.635391 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.635571 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq8c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vk2gn_openshift-marketplace(ec5adbe8-b11c-4f8d-8689-b1034c9436ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:01:45 crc kubenswrapper[4735]: E0131 15:01:45.636798 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vk2gn" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.545396 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" event={"ID":"ea89cfa6-d46d-4cda-a91e-a1d06a743204","Type":"ContainerStarted","Data":"9e65af8167fb9f2c0d0f9dbbc7a8486307a5188e5259f033f88446afc8fa30bb"} Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.546265 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqxxz" event={"ID":"ea89cfa6-d46d-4cda-a91e-a1d06a743204","Type":"ContainerStarted","Data":"ade6c45231ef5268b35457c518bcb40031c8db5f3a872cf5bae2c1824e9345bd"} Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.559202 4735 generic.go:334] "Generic (PLEG): container finished" podID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerID="87721be408ff6d34947e4ec76966ab6598c2b3516b9f21b4e6af0ae4549e2523" exitCode=0 Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.559402 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerDied","Data":"87721be408ff6d34947e4ec76966ab6598c2b3516b9f21b4e6af0ae4549e2523"} Jan 31 15:01:46 crc kubenswrapper[4735]: E0131 15:01:46.561687 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xmx2w" 
podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" Jan 31 15:01:46 crc kubenswrapper[4735]: E0131 15:01:46.561806 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vk2gn" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" Jan 31 15:01:46 crc kubenswrapper[4735]: E0131 15:01:46.571943 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bgtqr" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.579197 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rqxxz" podStartSLOduration=169.579181296 podStartE2EDuration="2m49.579181296s" podCreationTimestamp="2026-01-31 14:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:46.568613361 +0000 UTC m=+192.341942403" watchObservedRunningTime="2026-01-31 15:01:46.579181296 +0000 UTC m=+192.352510338" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.659684 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 15:01:46 crc kubenswrapper[4735]: E0131 15:01:46.660145 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8c51b4-614a-4aea-9900-0b73b5abc522" containerName="pruner" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.660160 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8c51b4-614a-4aea-9900-0b73b5abc522" containerName="pruner" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.660274 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8c51b4-614a-4aea-9900-0b73b5abc522" containerName="pruner" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.660689 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.666365 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.666815 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.683460 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.684124 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.684202 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.785443 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.785500 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.785627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.807346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:46 crc kubenswrapper[4735]: I0131 15:01:46.995559 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:47 crc kubenswrapper[4735]: I0131 15:01:47.344524 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 15:01:47 crc kubenswrapper[4735]: I0131 15:01:47.567096 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67f9503d-c47b-40cc-9b45-94e48f9d532a","Type":"ContainerStarted","Data":"e457c6b28a04740a7ee3d4a27173a739432313c4a8539fd72b52b3279414fa96"} Jan 31 15:01:47 crc kubenswrapper[4735]: I0131 15:01:47.571452 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerStarted","Data":"6cedb157762cef28340373b5b54fcf878555f6827da4f9e734cf26602d6e08c0"} Jan 31 15:01:47 crc kubenswrapper[4735]: I0131 15:01:47.592863 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j55nq" podStartSLOduration=1.629223858 podStartE2EDuration="40.592837442s" podCreationTimestamp="2026-01-31 15:01:07 +0000 UTC" firstStartedPulling="2026-01-31 15:01:08.026655505 +0000 UTC m=+153.799984547" lastFinishedPulling="2026-01-31 15:01:46.990269099 +0000 UTC m=+192.763598131" observedRunningTime="2026-01-31 15:01:47.591307888 +0000 UTC m=+193.364636940" watchObservedRunningTime="2026-01-31 15:01:47.592837442 +0000 UTC m=+193.366166484" Jan 31 15:01:48 crc kubenswrapper[4735]: I0131 15:01:48.576970 4735 generic.go:334] "Generic (PLEG): container finished" podID="67f9503d-c47b-40cc-9b45-94e48f9d532a" containerID="ad983ab80fa59f8854f1f2cd24d8d6ff065c86428844bfa451adebd27c3f3510" exitCode=0 Jan 31 15:01:48 crc kubenswrapper[4735]: I0131 15:01:48.577057 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67f9503d-c47b-40cc-9b45-94e48f9d532a","Type":"ContainerDied","Data":"ad983ab80fa59f8854f1f2cd24d8d6ff065c86428844bfa451adebd27c3f3510"} Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.791769 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.829343 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir\") pod \"67f9503d-c47b-40cc-9b45-94e48f9d532a\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.829473 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67f9503d-c47b-40cc-9b45-94e48f9d532a" (UID: "67f9503d-c47b-40cc-9b45-94e48f9d532a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.829521 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access\") pod \"67f9503d-c47b-40cc-9b45-94e48f9d532a\" (UID: \"67f9503d-c47b-40cc-9b45-94e48f9d532a\") " Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.829989 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f9503d-c47b-40cc-9b45-94e48f9d532a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.837775 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67f9503d-c47b-40cc-9b45-94e48f9d532a" (UID: "67f9503d-c47b-40cc-9b45-94e48f9d532a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:49 crc kubenswrapper[4735]: I0131 15:01:49.930608 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f9503d-c47b-40cc-9b45-94e48f9d532a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:50 crc kubenswrapper[4735]: I0131 15:01:50.588543 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67f9503d-c47b-40cc-9b45-94e48f9d532a","Type":"ContainerDied","Data":"e457c6b28a04740a7ee3d4a27173a739432313c4a8539fd72b52b3279414fa96"} Jan 31 15:01:50 crc kubenswrapper[4735]: I0131 15:01:50.588608 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e457c6b28a04740a7ee3d4a27173a739432313c4a8539fd72b52b3279414fa96" Jan 31 15:01:50 crc kubenswrapper[4735]: I0131 15:01:50.588694 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.453070 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 15:01:51 crc kubenswrapper[4735]: E0131 15:01:51.453778 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f9503d-c47b-40cc-9b45-94e48f9d532a" containerName="pruner" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.453796 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f9503d-c47b-40cc-9b45-94e48f9d532a" containerName="pruner" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.453925 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f9503d-c47b-40cc-9b45-94e48f9d532a" containerName="pruner" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.454476 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.497930 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.498353 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.503948 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.653456 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.653536 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.653567 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.755074 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.755225 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.755269 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.755265 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.755358 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock\") pod \"installer-9-crc\" (UID: 
\"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.781388 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:51 crc kubenswrapper[4735]: I0131 15:01:51.818396 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:01:52 crc kubenswrapper[4735]: I0131 15:01:52.023318 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 15:01:52 crc kubenswrapper[4735]: I0131 15:01:52.602768 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d548798-4069-410a-816b-3aea3ff905c6","Type":"ContainerStarted","Data":"ac97c3e5382adfec694e29f439988ff1dd0d9e298868aa4f33f7b4d2e0db6a78"} Jan 31 15:01:52 crc kubenswrapper[4735]: I0131 15:01:52.603395 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d548798-4069-410a-816b-3aea3ff905c6","Type":"ContainerStarted","Data":"469d292cd6ed841ed65dee8501ba8b196632a868d06f83347a2583c43aa9b667"} Jan 31 15:01:52 crc kubenswrapper[4735]: I0131 15:01:52.621306 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.621282224 podStartE2EDuration="1.621282224s" podCreationTimestamp="2026-01-31 15:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:52.616754006 +0000 UTC m=+198.390083058" watchObservedRunningTime="2026-01-31 15:01:52.621282224 +0000 UTC m=+198.394611276" Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.384182 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.405195 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.405245 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.617284 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.630655 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerStarted","Data":"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90"} Jan 31 15:01:57 crc kubenswrapper[4735]: I0131 15:01:57.671806 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:01:58 crc kubenswrapper[4735]: I0131 15:01:58.637557 4735 generic.go:334] "Generic (PLEG): container finished" podID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerID="658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90" exitCode=0 Jan 31 15:01:58 crc 
kubenswrapper[4735]: I0131 15:01:58.637623 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerDied","Data":"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90"} Jan 31 15:01:58 crc kubenswrapper[4735]: I0131 15:01:58.643223 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerStarted","Data":"9c26bfb26967e42c1e8a27d9dab82c8098dc6a48bb7da3ed442840c810fae25c"} Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.649044 4735 generic.go:334] "Generic (PLEG): container finished" podID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerID="939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e" exitCode=0 Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.649135 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerDied","Data":"939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e"} Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.652310 4735 generic.go:334] "Generic (PLEG): container finished" podID="83f255f7-94e5-4129-adfb-10b19b294eef" containerID="9c26bfb26967e42c1e8a27d9dab82c8098dc6a48bb7da3ed442840c810fae25c" exitCode=0 Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.652396 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerDied","Data":"9c26bfb26967e42c1e8a27d9dab82c8098dc6a48bb7da3ed442840c810fae25c"} Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.654385 4735 generic.go:334] "Generic (PLEG): container finished" podID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerID="00d60ba7abfe3335a12529a62691837098247f9adc325c6985407b46f868a3f3" exitCode=0 Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.654458 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerDied","Data":"00d60ba7abfe3335a12529a62691837098247f9adc325c6985407b46f868a3f3"} Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.658520 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerStarted","Data":"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108"} Jan 31 15:01:59 crc kubenswrapper[4735]: I0131 15:01:59.730715 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqg4l" podStartSLOduration=2.831198894 podStartE2EDuration="51.730685433s" podCreationTimestamp="2026-01-31 15:01:08 +0000 UTC" firstStartedPulling="2026-01-31 15:01:10.139552555 +0000 UTC m=+155.912881597" lastFinishedPulling="2026-01-31 15:01:59.039039094 +0000 UTC m=+204.812368136" observedRunningTime="2026-01-31 15:01:59.727124522 +0000 UTC m=+205.500453584" watchObservedRunningTime="2026-01-31 15:01:59.730685433 +0000 UTC m=+205.504014525" Jan 31 15:02:00 crc kubenswrapper[4735]: I0131 15:02:00.670743 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" 
event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerStarted","Data":"ced793d42a37a6b8b6426cb0194ffdbb5820514a8bf5214dea6940388e219498"} Jan 31 15:02:00 crc kubenswrapper[4735]: I0131 15:02:00.709014 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgtqr" podStartSLOduration=2.54222676 podStartE2EDuration="55.708990725s" podCreationTimestamp="2026-01-31 15:01:05 +0000 UTC" firstStartedPulling="2026-01-31 15:01:06.96276311 +0000 UTC m=+152.736092142" lastFinishedPulling="2026-01-31 15:02:00.129527065 +0000 UTC m=+205.902856107" observedRunningTime="2026-01-31 15:02:00.703823868 +0000 UTC m=+206.477152910" watchObservedRunningTime="2026-01-31 15:02:00.708990725 +0000 UTC m=+206.482319767" Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.678789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerStarted","Data":"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf"} Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.681445 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerStarted","Data":"521bf8d73cfd07cd4833fb15a738831b0951858ffd1c9867139bf7e63884a4e2"} Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.683566 4735 generic.go:334] "Generic (PLEG): container finished" podID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerID="aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7" exitCode=0 Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.683613 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerDied","Data":"aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7"} Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.685710 4735 generic.go:334] "Generic (PLEG): container finished" podID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerID="8f7e43ac28c681450cca8bb8a3f760d1736a4e91a625a483cb3c96913e7d82e0" exitCode=0 Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.685795 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerDied","Data":"8f7e43ac28c681450cca8bb8a3f760d1736a4e91a625a483cb3c96913e7d82e0"} Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.689619 4735 generic.go:334] "Generic (PLEG): container finished" podID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerID="67d44ede0d62d8fc240da1661558a443f2926d8ca59f6b7de6e3a4510f088d97" exitCode=0 Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.689665 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerDied","Data":"67d44ede0d62d8fc240da1661558a443f2926d8ca59f6b7de6e3a4510f088d97"} Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.706575 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jltfp" podStartSLOduration=3.189691717 podStartE2EDuration="54.706557263s" podCreationTimestamp="2026-01-31 15:01:07 +0000 UTC" firstStartedPulling="2026-01-31 15:01:09.150610492 +0000 UTC m=+154.923939534" 
lastFinishedPulling="2026-01-31 15:02:00.667476038 +0000 UTC m=+206.440805080" observedRunningTime="2026-01-31 15:02:01.704718101 +0000 UTC m=+207.478047143" watchObservedRunningTime="2026-01-31 15:02:01.706557263 +0000 UTC m=+207.479886305" Jan 31 15:02:01 crc kubenswrapper[4735]: I0131 15:02:01.777181 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgkxg" podStartSLOduration=3.161148028 podStartE2EDuration="56.777162894s" podCreationTimestamp="2026-01-31 15:01:05 +0000 UTC" firstStartedPulling="2026-01-31 15:01:06.982963742 +0000 UTC m=+152.756292794" lastFinishedPulling="2026-01-31 15:02:00.598978618 +0000 UTC m=+206.372307660" observedRunningTime="2026-01-31 15:02:01.773246533 +0000 UTC m=+207.546575575" watchObservedRunningTime="2026-01-31 15:02:01.777162894 +0000 UTC m=+207.550491936" Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.698519 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerStarted","Data":"946a2bfbf1d419294bf7fee3f36b37909a72b51d9489d74f30a7ee9792c26a32"} Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.700782 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerStarted","Data":"9e6540baea7826cdf1168bf929bdd646dbd65fe517c703930568edc9419d40d3"} Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.703284 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerStarted","Data":"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2"} Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.721961 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vk2gn" podStartSLOduration=2.630978633 podStartE2EDuration="57.721945346s" podCreationTimestamp="2026-01-31 15:01:05 +0000 UTC" firstStartedPulling="2026-01-31 15:01:06.978522834 +0000 UTC m=+152.751851886" lastFinishedPulling="2026-01-31 15:02:02.069489557 +0000 UTC m=+207.842818599" observedRunningTime="2026-01-31 15:02:02.720377232 +0000 UTC m=+208.493706294" watchObservedRunningTime="2026-01-31 15:02:02.721945346 +0000 UTC m=+208.495274388" Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.744778 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmx2w" podStartSLOduration=2.61449688 podStartE2EDuration="57.744759693s" podCreationTimestamp="2026-01-31 15:01:05 +0000 UTC" firstStartedPulling="2026-01-31 15:01:06.986913876 +0000 UTC m=+152.760242918" lastFinishedPulling="2026-01-31 15:02:02.117176689 +0000 UTC m=+207.890505731" observedRunningTime="2026-01-31 15:02:02.743086655 +0000 UTC m=+208.516415697" watchObservedRunningTime="2026-01-31 15:02:02.744759693 +0000 UTC m=+208.518088735" Jan 31 15:02:02 crc kubenswrapper[4735]: I0131 15:02:02.765821 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkdfl" podStartSLOduration=2.735930662 podStartE2EDuration="54.765804319s" podCreationTimestamp="2026-01-31 15:01:08 +0000 UTC" firstStartedPulling="2026-01-31 15:01:10.153144207 +0000 UTC m=+155.926473239" lastFinishedPulling="2026-01-31 15:02:02.183017854 +0000 UTC 
m=+207.956346896" observedRunningTime="2026-01-31 15:02:02.765185832 +0000 UTC m=+208.538514874" watchObservedRunningTime="2026-01-31 15:02:02.765804319 +0000 UTC m=+208.539133361" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.484816 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.485163 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.528880 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.608180 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.608244 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.656925 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:02:05 crc kubenswrapper[4735]: I0131 15:02:05.756263 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.044902 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.045232 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.096875 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.152518 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.152590 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.193653 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:06 crc kubenswrapper[4735]: I0131 15:02:06.795628 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.345825 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.345882 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.345929 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.346356 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.346475 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329" gracePeriod=600 Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.790493 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.802550 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:07 crc kubenswrapper[4735]: I0131 15:02:07.867548 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.193932 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.618383 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.619583 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.739965 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329" exitCode=0 Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.740201 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgkxg" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="registry-server" containerID="cri-o://521bf8d73cfd07cd4833fb15a738831b0951858ffd1c9867139bf7e63884a4e2" gracePeriod=2 Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.740280 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329"} Jan 31 15:02:08 crc kubenswrapper[4735]: I0131 15:02:08.793100 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.050724 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:09 crc 
kubenswrapper[4735]: I0131 15:02:09.051014 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.132804 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.660181 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tkdfl" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="registry-server" probeResult="failure" output=< Jan 31 15:02:09 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:02:09 crc kubenswrapper[4735]: > Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.748095 4735 generic.go:334] "Generic (PLEG): container finished" podID="83f255f7-94e5-4129-adfb-10b19b294eef" containerID="521bf8d73cfd07cd4833fb15a738831b0951858ffd1c9867139bf7e63884a4e2" exitCode=0 Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.748480 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerDied","Data":"521bf8d73cfd07cd4833fb15a738831b0951858ffd1c9867139bf7e63884a4e2"} Jan 31 15:02:09 crc kubenswrapper[4735]: I0131 15:02:09.806904 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.598509 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.704777 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.760269 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271"} Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.763608 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgkxg" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.763593 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgkxg" event={"ID":"83f255f7-94e5-4129-adfb-10b19b294eef","Type":"ContainerDied","Data":"2448c04358b84127f3d2b51d663daf0e444e3c86f1d0604cc0ba3f6c4f62c5b5"} Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.763661 4735 scope.go:117] "RemoveContainer" containerID="521bf8d73cfd07cd4833fb15a738831b0951858ffd1c9867139bf7e63884a4e2" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.794467 4735 scope.go:117] "RemoveContainer" containerID="9c26bfb26967e42c1e8a27d9dab82c8098dc6a48bb7da3ed442840c810fae25c" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.818524 4735 scope.go:117] "RemoveContainer" containerID="0c650b60ecf0d46a6ee87ddc81b9da00f62eb4dc95b0528b03d5a476d4116464" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.899747 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities\") pod \"83f255f7-94e5-4129-adfb-10b19b294eef\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.899869 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content\") pod \"83f255f7-94e5-4129-adfb-10b19b294eef\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.899902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5q4s\" (UniqueName: \"kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s\") pod \"83f255f7-94e5-4129-adfb-10b19b294eef\" (UID: \"83f255f7-94e5-4129-adfb-10b19b294eef\") " Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.900927 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities" (OuterVolumeSpecName: "utilities") pod "83f255f7-94e5-4129-adfb-10b19b294eef" (UID: "83f255f7-94e5-4129-adfb-10b19b294eef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.909719 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s" (OuterVolumeSpecName: "kube-api-access-p5q4s") pod "83f255f7-94e5-4129-adfb-10b19b294eef" (UID: "83f255f7-94e5-4129-adfb-10b19b294eef"). InnerVolumeSpecName "kube-api-access-p5q4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:10 crc kubenswrapper[4735]: I0131 15:02:10.958591 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f255f7-94e5-4129-adfb-10b19b294eef" (UID: "83f255f7-94e5-4129-adfb-10b19b294eef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.000852 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.000888 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5q4s\" (UniqueName: \"kubernetes.io/projected/83f255f7-94e5-4129-adfb-10b19b294eef-kube-api-access-p5q4s\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.000902 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f255f7-94e5-4129-adfb-10b19b294eef-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.098115 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.106591 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgkxg"] Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.549484 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" path="/var/lib/kubelet/pods/83f255f7-94e5-4129-adfb-10b19b294eef/volumes" Jan 31 15:02:11 crc kubenswrapper[4735]: I0131 15:02:11.773720 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jltfp" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="registry-server" containerID="cri-o://4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf" gracePeriod=2 Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.195556 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.215956 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7v8d\" (UniqueName: \"kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d\") pod \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.216007 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content\") pod \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.216028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities\") pod \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\" (UID: \"8637b2c6-4051-4b0e-967b-c6f4897c9f4a\") " Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.216872 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities" (OuterVolumeSpecName: "utilities") pod "8637b2c6-4051-4b0e-967b-c6f4897c9f4a" (UID: "8637b2c6-4051-4b0e-967b-c6f4897c9f4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.223260 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d" (OuterVolumeSpecName: "kube-api-access-x7v8d") pod "8637b2c6-4051-4b0e-967b-c6f4897c9f4a" (UID: "8637b2c6-4051-4b0e-967b-c6f4897c9f4a"). InnerVolumeSpecName "kube-api-access-x7v8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.240124 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8637b2c6-4051-4b0e-967b-c6f4897c9f4a" (UID: "8637b2c6-4051-4b0e-967b-c6f4897c9f4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.317091 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7v8d\" (UniqueName: \"kubernetes.io/projected/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-kube-api-access-x7v8d\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.317314 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.317325 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8637b2c6-4051-4b0e-967b-c6f4897c9f4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.784693 4735 generic.go:334] "Generic (PLEG): container finished" podID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerID="4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf" exitCode=0 Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.784764 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerDied","Data":"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf"} Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.784808 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jltfp" event={"ID":"8637b2c6-4051-4b0e-967b-c6f4897c9f4a","Type":"ContainerDied","Data":"d73d1ad59d8c663760f79672788d5f89f5c48d737ab4c8c5267313f9d997749a"} Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.784837 4735 scope.go:117] "RemoveContainer" containerID="4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.785001 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jltfp" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.818273 4735 scope.go:117] "RemoveContainer" containerID="939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.846540 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.852499 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jltfp"] Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.852836 4735 scope.go:117] "RemoveContainer" containerID="202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.880844 4735 scope.go:117] "RemoveContainer" containerID="4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf" Jan 31 15:02:12 crc kubenswrapper[4735]: E0131 15:02:12.881341 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf\": container with ID starting with 4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf not found: ID does not exist" containerID="4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.881398 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf"} err="failed to get container status \"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf\": rpc error: code = NotFound desc = could not find container \"4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf\": container with ID starting with 4da524496897015a2c562ea6cb4e7568152462301e7359dceb2620b588c364bf not found: ID does not exist" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.881462 4735 scope.go:117] "RemoveContainer" containerID="939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e" Jan 31 15:02:12 crc kubenswrapper[4735]: E0131 15:02:12.881840 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e\": container with ID starting with 939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e not found: ID does not exist" containerID="939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.881906 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e"} err="failed to get container status \"939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e\": rpc error: code = NotFound desc = could not find container \"939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e\": container with ID starting with 939baddd2eedb45d3e16492dad7f461f1e91112eec812abbc28152b68b3b7c5e not found: ID does not exist" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.881955 4735 scope.go:117] "RemoveContainer" containerID="202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056" Jan 31 15:02:12 crc kubenswrapper[4735]: E0131 15:02:12.882320 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056\": container with ID starting with 202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056 not found: ID does not exist" containerID="202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.882351 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056"} err="failed to get container status \"202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056\": rpc error: code = NotFound desc = could not find container \"202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056\": container with ID starting with 202464be1236352825dc56860e42e046fb1d0828cdb469575a40568a3f028056 not found: ID does not exist" Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.996823 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:02:12 crc kubenswrapper[4735]: I0131 15:02:12.997495 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqg4l" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="registry-server" containerID="cri-o://d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108" gracePeriod=2 Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.433465 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.537550 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj9bj\" (UniqueName: \"kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj\") pod \"31b1acb2-6a34-434b-9f48-1934d3eda307\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.537682 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities\") pod \"31b1acb2-6a34-434b-9f48-1934d3eda307\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.537716 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content\") pod \"31b1acb2-6a34-434b-9f48-1934d3eda307\" (UID: \"31b1acb2-6a34-434b-9f48-1934d3eda307\") " Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.540409 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities" (OuterVolumeSpecName: "utilities") pod "31b1acb2-6a34-434b-9f48-1934d3eda307" (UID: "31b1acb2-6a34-434b-9f48-1934d3eda307"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.544617 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj" (OuterVolumeSpecName: "kube-api-access-vj9bj") pod "31b1acb2-6a34-434b-9f48-1934d3eda307" (UID: "31b1acb2-6a34-434b-9f48-1934d3eda307"). 
InnerVolumeSpecName "kube-api-access-vj9bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.550479 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" path="/var/lib/kubelet/pods/8637b2c6-4051-4b0e-967b-c6f4897c9f4a/volumes" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.640469 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.640605 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj9bj\" (UniqueName: \"kubernetes.io/projected/31b1acb2-6a34-434b-9f48-1934d3eda307-kube-api-access-vj9bj\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.729062 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31b1acb2-6a34-434b-9f48-1934d3eda307" (UID: "31b1acb2-6a34-434b-9f48-1934d3eda307"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.742761 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b1acb2-6a34-434b-9f48-1934d3eda307-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.796262 4735 generic.go:334] "Generic (PLEG): container finished" podID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerID="d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108" exitCode=0 Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.796349 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerDied","Data":"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108"} Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.796455 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqg4l" event={"ID":"31b1acb2-6a34-434b-9f48-1934d3eda307","Type":"ContainerDied","Data":"47d771ca3c6b922cbc1c1f09aecdc6d09e8ad820d364f01f8d969d75d0e9ed05"} Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.796412 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqg4l" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.796486 4735 scope.go:117] "RemoveContainer" containerID="d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.836112 4735 scope.go:117] "RemoveContainer" containerID="658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.871718 4735 scope.go:117] "RemoveContainer" containerID="9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.871957 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.877194 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqg4l"] Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.890402 4735 scope.go:117] "RemoveContainer" containerID="d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108" Jan 31 15:02:13 crc kubenswrapper[4735]: E0131 15:02:13.891007 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108\": container with ID starting with d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108 not found: ID does not exist" containerID="d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.891065 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108"} err="failed to get container status \"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108\": rpc error: code = NotFound desc = could not find container \"d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108\": container with ID starting with d62df28059c014a4684bd2c529433f5883f899c8f15877a52de73db10b222108 not found: ID does not exist" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.891102 4735 scope.go:117] "RemoveContainer" containerID="658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90" Jan 31 15:02:13 crc kubenswrapper[4735]: E0131 15:02:13.891563 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90\": container with ID starting with 658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90 not found: ID does not exist" containerID="658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.891610 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90"} err="failed to get container status \"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90\": rpc error: code = NotFound desc = could not find container \"658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90\": container with ID starting with 658003b610e880a78f229d96f165f790de5b07adc812a2196f89b00efd188d90 not found: ID does not exist" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.891637 4735 scope.go:117] "RemoveContainer" 
containerID="9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501" Jan 31 15:02:13 crc kubenswrapper[4735]: E0131 15:02:13.892033 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501\": container with ID starting with 9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501 not found: ID does not exist" containerID="9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501" Jan 31 15:02:13 crc kubenswrapper[4735]: I0131 15:02:13.892092 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501"} err="failed to get container status \"9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501\": rpc error: code = NotFound desc = could not find container \"9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501\": container with ID starting with 9ef834e1aa42835a7cb68283686bba92e0f9ddef3ac250c355afc5dc43a08501 not found: ID does not exist" Jan 31 15:02:15 crc kubenswrapper[4735]: I0131 15:02:15.559179 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" path="/var/lib/kubelet/pods/31b1acb2-6a34-434b-9f48-1934d3eda307/volumes" Jan 31 15:02:15 crc kubenswrapper[4735]: I0131 15:02:15.663694 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:02:16 crc kubenswrapper[4735]: I0131 15:02:16.111352 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.399277 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.400542 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmx2w" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="registry-server" containerID="cri-o://fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2" gracePeriod=2 Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.715747 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.784371 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.848372 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.849465 4735 generic.go:334] "Generic (PLEG): container finished" podID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerID="fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2" exitCode=0 Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.849509 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerDied","Data":"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2"} Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.849569 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmx2w" event={"ID":"da2ebe13-1049-4de9-9b68-19ebff67ff15","Type":"ContainerDied","Data":"cb38f9ee5bc53e1d715e32b2a5676f7b033f358b2154e4c6b3d1065adae30195"} Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.849599 4735 scope.go:117] "RemoveContainer" containerID="fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.867356 4735 scope.go:117] "RemoveContainer" containerID="aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.895599 4735 scope.go:117] "RemoveContainer" containerID="079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.916956 4735 scope.go:117] "RemoveContainer" containerID="fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2" Jan 31 15:02:18 crc kubenswrapper[4735]: E0131 15:02:18.917605 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2\": container with ID starting with fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2 not found: ID does not exist" containerID="fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.917677 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2"} err="failed to get container status \"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2\": rpc error: code = NotFound desc = could not find container \"fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2\": container with ID starting with fd402895c97310b32e225a7e3b947a2232b9ebb23ecc08b7ecfbb2561d9ae6c2 not found: ID does not exist" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.917722 4735 scope.go:117] "RemoveContainer" containerID="aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7" Jan 31 15:02:18 crc kubenswrapper[4735]: E0131 15:02:18.918272 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7\": container with ID starting with aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7 not found: ID does not exist" containerID="aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.918312 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7"} err="failed to get container status \"aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7\": rpc error: code = NotFound desc = could not find container \"aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7\": container with ID starting with aa9fa228b2e7a55af50e128e068f84da894d4c9d563a0eb94f1badf11ad3f8b7 not found: ID does not exist" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.918340 4735 scope.go:117] "RemoveContainer" containerID="079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5" Jan 31 15:02:18 crc kubenswrapper[4735]: E0131 15:02:18.918928 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5\": container with ID starting with 079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5 not found: ID does not exist" containerID="079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.918992 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5"} err="failed to get container status \"079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5\": rpc error: code = NotFound desc = could not find container \"079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5\": container with ID starting with 079c93f2f3fc164ee0b76f3aa7c89a974f13bbace9089b8016d48d9a1e9727c5 not found: ID does not exist" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.927476 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content\") pod \"da2ebe13-1049-4de9-9b68-19ebff67ff15\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.927574 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities\") pod \"da2ebe13-1049-4de9-9b68-19ebff67ff15\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.927649 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sm7s\" (UniqueName: \"kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s\") pod \"da2ebe13-1049-4de9-9b68-19ebff67ff15\" (UID: \"da2ebe13-1049-4de9-9b68-19ebff67ff15\") " Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.928480 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities" (OuterVolumeSpecName: "utilities") pod "da2ebe13-1049-4de9-9b68-19ebff67ff15" (UID: "da2ebe13-1049-4de9-9b68-19ebff67ff15"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.933327 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s" (OuterVolumeSpecName: "kube-api-access-6sm7s") pod "da2ebe13-1049-4de9-9b68-19ebff67ff15" (UID: "da2ebe13-1049-4de9-9b68-19ebff67ff15"). InnerVolumeSpecName "kube-api-access-6sm7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:18 crc kubenswrapper[4735]: I0131 15:02:18.973944 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2ebe13-1049-4de9-9b68-19ebff67ff15" (UID: "da2ebe13-1049-4de9-9b68-19ebff67ff15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.028393 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.028449 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2ebe13-1049-4de9-9b68-19ebff67ff15-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.028466 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sm7s\" (UniqueName: \"kubernetes.io/projected/da2ebe13-1049-4de9-9b68-19ebff67ff15-kube-api-access-6sm7s\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.858617 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmx2w" Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.891184 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:02:19 crc kubenswrapper[4735]: I0131 15:02:19.898898 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmx2w"] Jan 31 15:02:21 crc kubenswrapper[4735]: I0131 15:02:21.555129 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" path="/var/lib/kubelet/pods/da2ebe13-1049-4de9-9b68-19ebff67ff15/volumes" Jan 31 15:02:22 crc kubenswrapper[4735]: I0131 15:02:22.430024 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" podUID="f5835267-03a0-4567-b113-84e6a885af15" containerName="oauth-openshift" containerID="cri-o://92b0fb1592b3bda9d38766e268f9ea97cab522bbe006107c57a88626adc47ac5" gracePeriod=15 Jan 31 15:02:22 crc kubenswrapper[4735]: I0131 15:02:22.884078 4735 generic.go:334] "Generic (PLEG): container finished" podID="f5835267-03a0-4567-b113-84e6a885af15" containerID="92b0fb1592b3bda9d38766e268f9ea97cab522bbe006107c57a88626adc47ac5" exitCode=0 Jan 31 15:02:22 crc kubenswrapper[4735]: I0131 15:02:22.884248 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" event={"ID":"f5835267-03a0-4567-b113-84e6a885af15","Type":"ContainerDied","Data":"92b0fb1592b3bda9d38766e268f9ea97cab522bbe006107c57a88626adc47ac5"} Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.471633 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611287 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611356 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611454 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6qsb\" (UniqueName: \"kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611514 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir" 
(OuterVolumeSpecName: "audit-dir") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611578 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611621 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611669 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611708 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611787 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611829 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611871 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611937 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.611986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.612046 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login\") pod \"f5835267-03a0-4567-b113-84e6a885af15\" (UID: \"f5835267-03a0-4567-b113-84e6a885af15\") " Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.612449 4735 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5835267-03a0-4567-b113-84e6a885af15-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.614138 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.614229 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.614365 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.615078 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.619007 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.620020 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb" (OuterVolumeSpecName: "kube-api-access-j6qsb") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "kube-api-access-j6qsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.620121 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.620236 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.622848 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.623205 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.624408 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.630199 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.636152 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f5835267-03a0-4567-b113-84e6a885af15" (UID: "f5835267-03a0-4567-b113-84e6a885af15"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713264 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6qsb\" (UniqueName: \"kubernetes.io/projected/f5835267-03a0-4567-b113-84e6a885af15-kube-api-access-j6qsb\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713319 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713341 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713362 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713381 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713403 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713466 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713492 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713513 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713533 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc 
kubenswrapper[4735]: I0131 15:02:23.713554 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713574 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.713593 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f5835267-03a0-4567-b113-84e6a885af15-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.907370 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" event={"ID":"f5835267-03a0-4567-b113-84e6a885af15","Type":"ContainerDied","Data":"691f2fa2e0fc8498d6225a229ce303faa53264df241fb8776043a1830a69622a"} Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.907486 4735 scope.go:117] "RemoveContainer" containerID="92b0fb1592b3bda9d38766e268f9ea97cab522bbe006107c57a88626adc47ac5" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.907727 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qf4bc" Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.956725 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:02:23 crc kubenswrapper[4735]: I0131 15:02:23.962830 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qf4bc"] Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.555400 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5835267-03a0-4567-b113-84e6a885af15" path="/var/lib/kubelet/pods/f5835267-03a0-4567-b113-84e6a885af15/volumes" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.634992 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt"] Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635348 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635374 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635399 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635415 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635667 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635688 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" 
containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635711 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635726 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635750 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635765 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635792 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635809 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635832 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5835267-03a0-4567-b113-84e6a885af15" containerName="oauth-openshift" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635847 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5835267-03a0-4567-b113-84e6a885af15" containerName="oauth-openshift" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635868 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635883 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635905 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635920 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635944 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635960 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.635982 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.635999 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="extract-content" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.636020 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636036 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: E0131 15:02:25.636063 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636080 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="extract-utilities" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636295 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b1acb2-6a34-434b-9f48-1934d3eda307" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636318 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2ebe13-1049-4de9-9b68-19ebff67ff15" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636338 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f255f7-94e5-4129-adfb-10b19b294eef" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636355 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5835267-03a0-4567-b113-84e6a885af15" containerName="oauth-openshift" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.636369 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8637b2c6-4051-4b0e-967b-c6f4897c9f4a" containerName="registry-server" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.637134 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.643782 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.644229 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.647327 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.648239 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.648957 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.649643 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.650067 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.650291 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.650863 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.651808 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.651998 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.652040 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.720093 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.722392 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt"] Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.723945 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.732879 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748047 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748149 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748186 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-login\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748212 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-session\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " 
pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74584dee-99f9-41ed-8807-b5eb7189947d-audit-dir\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748320 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748353 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-audit-policies\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748448 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748473 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748500 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-error\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748537 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-provider-selection\") 
pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.748563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4s8\" (UniqueName: \"kubernetes.io/projected/74584dee-99f9-41ed-8807-b5eb7189947d-kube-api-access-cr4s8\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.849770 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.849871 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-audit-policies\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.849914 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.849950 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.849993 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-error\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850049 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4s8\" (UniqueName: \"kubernetes.io/projected/74584dee-99f9-41ed-8807-b5eb7189947d-kube-api-access-cr4s8\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850131 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850183 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850225 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-login\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850269 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850310 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-session\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850373 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74584dee-99f9-41ed-8807-b5eb7189947d-audit-dir\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.850408 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.851307 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74584dee-99f9-41ed-8807-b5eb7189947d-audit-dir\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: 
\"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.852505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-service-ca\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.852547 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.852741 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-audit-policies\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.853150 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.858865 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-router-certs\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.859144 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-error\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.859595 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.859857 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " 
pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.860373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.860825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.861379 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-user-template-login\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.862781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74584dee-99f9-41ed-8807-b5eb7189947d-v4-0-config-system-session\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:25 crc kubenswrapper[4735]: I0131 15:02:25.883496 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4s8\" (UniqueName: \"kubernetes.io/projected/74584dee-99f9-41ed-8807-b5eb7189947d-kube-api-access-cr4s8\") pod \"oauth-openshift-5566b8fdb8-k5hqt\" (UID: \"74584dee-99f9-41ed-8807-b5eb7189947d\") " pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.023968 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.282140 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt"] Jan 31 15:02:26 crc kubenswrapper[4735]: W0131 15:02:26.297943 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74584dee_99f9_41ed_8807_b5eb7189947d.slice/crio-f9c487214874d4ebf46d64548b8454aa364537b90f1fbad3aa7b0d315b51eff2 WatchSource:0}: Error finding container f9c487214874d4ebf46d64548b8454aa364537b90f1fbad3aa7b0d315b51eff2: Status 404 returned error can't find the container with id f9c487214874d4ebf46d64548b8454aa364537b90f1fbad3aa7b0d315b51eff2 Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.934703 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" event={"ID":"74584dee-99f9-41ed-8807-b5eb7189947d","Type":"ContainerStarted","Data":"cfafe1881f302e0ed5d9661e84742d33f35e77786a046a4a113fa9e2511ad5ea"} Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.934992 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" event={"ID":"74584dee-99f9-41ed-8807-b5eb7189947d","Type":"ContainerStarted","Data":"f9c487214874d4ebf46d64548b8454aa364537b90f1fbad3aa7b0d315b51eff2"} Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.935203 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.941335 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" Jan 31 15:02:26 crc kubenswrapper[4735]: I0131 15:02:26.977153 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5566b8fdb8-k5hqt" podStartSLOduration=29.977128812 podStartE2EDuration="29.977128812s" podCreationTimestamp="2026-01-31 15:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:26.973827448 +0000 UTC m=+232.747156530" watchObservedRunningTime="2026-01-31 15:02:26.977128812 +0000 UTC m=+232.750457894" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.200148 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.201145 4735 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.201363 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.201744 4735 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202416 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc" gracePeriod=15 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202485 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd" gracePeriod=15 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202500 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1" gracePeriod=15 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202529 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a" gracePeriod=15 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202532 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520" gracePeriod=15 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.202981 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203390 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203414 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203462 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203473 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203486 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203495 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203511 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203520 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203532 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203540 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203553 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203561 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203572 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203580 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 15:02:30 crc kubenswrapper[4735]: E0131 15:02:30.203592 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203600 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203741 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203766 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203780 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203793 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203805 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203819 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.203836 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 
15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.325773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326089 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326153 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326183 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326207 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326621 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.326792 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428405 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428524 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428551 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428589 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428595 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428648 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428687 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428694 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428696 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 
15:02:30.428754 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428641 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428882 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428954 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.428954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.429004 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.963578 4735 generic.go:334] "Generic (PLEG): container finished" podID="3d548798-4069-410a-816b-3aea3ff905c6" containerID="ac97c3e5382adfec694e29f439988ff1dd0d9e298868aa4f33f7b4d2e0db6a78" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.963695 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d548798-4069-410a-816b-3aea3ff905c6","Type":"ContainerDied","Data":"ac97c3e5382adfec694e29f439988ff1dd0d9e298868aa4f33f7b4d2e0db6a78"} Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.965017 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.965650 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.967463 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.969748 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.971079 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.971114 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.971129 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.971146 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520" exitCode=2 Jan 31 15:02:30 crc kubenswrapper[4735]: I0131 15:02:30.971200 4735 scope.go:117] "RemoveContainer" containerID="c28b8d90760f9dc4e02bd6a2f64b617430e4a417de226ef5d4f6a8fe7454ed79" Jan 31 15:02:31 crc kubenswrapper[4735]: I0131 15:02:31.983961 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.288719 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.289275 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.358914 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock\") pod \"3d548798-4069-410a-816b-3aea3ff905c6\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359227 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access\") pod \"3d548798-4069-410a-816b-3aea3ff905c6\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359010 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d548798-4069-410a-816b-3aea3ff905c6" (UID: "3d548798-4069-410a-816b-3aea3ff905c6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359269 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir\") pod \"3d548798-4069-410a-816b-3aea3ff905c6\" (UID: \"3d548798-4069-410a-816b-3aea3ff905c6\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359452 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d548798-4069-410a-816b-3aea3ff905c6" (UID: "3d548798-4069-410a-816b-3aea3ff905c6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359860 4735 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.359882 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d548798-4069-410a-816b-3aea3ff905c6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.365971 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d548798-4069-410a-816b-3aea3ff905c6" (UID: "3d548798-4069-410a-816b-3aea3ff905c6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.460934 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d548798-4069-410a-816b-3aea3ff905c6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.624559 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.625680 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.626301 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.626882 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.764917 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765026 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765024 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765151 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765260 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765586 4735 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765613 4735 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.765630 4735 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.993282 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.994615 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc" exitCode=0 Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.994737 4735 scope.go:117] "RemoveContainer" containerID="c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.994745 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.997910 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d548798-4069-410a-816b-3aea3ff905c6","Type":"ContainerDied","Data":"469d292cd6ed841ed65dee8501ba8b196632a868d06f83347a2583c43aa9b667"} Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.997955 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469d292cd6ed841ed65dee8501ba8b196632a868d06f83347a2583c43aa9b667" Jan 31 15:02:32 crc kubenswrapper[4735]: I0131 15:02:32.997992 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.008518 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.008724 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.012209 4735 scope.go:117] "RemoveContainer" containerID="1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.020808 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.021178 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.025143 4735 scope.go:117] "RemoveContainer" containerID="af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.037810 4735 scope.go:117] "RemoveContainer" containerID="03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.049825 4735 scope.go:117] "RemoveContainer" containerID="8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.063567 4735 scope.go:117] "RemoveContainer" containerID="6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.080806 4735 scope.go:117] "RemoveContainer" containerID="c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.081108 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\": container with ID starting with c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd not found: ID does not exist" containerID="c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081151 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd"} err="failed to get container status \"c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\": rpc error: code = NotFound desc = could not find container 
\"c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd\": container with ID starting with c018416a0bab7ee078ea9387285b2f9a4a24334932862aff9749c253a5f021fd not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081184 4735 scope.go:117] "RemoveContainer" containerID="1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.081623 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\": container with ID starting with 1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a not found: ID does not exist" containerID="1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081649 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a"} err="failed to get container status \"1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\": rpc error: code = NotFound desc = could not find container \"1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a\": container with ID starting with 1e6efc5fafafbb5e6a69ac7880426cd0e0a95bb3bc55f43d27725df068d4bd2a not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081665 4735 scope.go:117] "RemoveContainer" containerID="af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.081882 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\": container with ID starting with af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1 not found: ID does not exist" containerID="af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081907 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1"} err="failed to get container status \"af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\": rpc error: code = NotFound desc = could not find container \"af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1\": container with ID starting with af07676f8ce3a87922cda70ba2df078c6186a10067d84729f9e500b34d8f6dd1 not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.081922 4735 scope.go:117] "RemoveContainer" containerID="03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.082161 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\": container with ID starting with 03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520 not found: ID does not exist" containerID="03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.082181 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520"} 
err="failed to get container status \"03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\": rpc error: code = NotFound desc = could not find container \"03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520\": container with ID starting with 03c77d53b11396ad107c2c91e105bf2e08c96dc1f131e68263a96f36f47fa520 not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.082193 4735 scope.go:117] "RemoveContainer" containerID="8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.082500 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\": container with ID starting with 8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc not found: ID does not exist" containerID="8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.082548 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc"} err="failed to get container status \"8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\": rpc error: code = NotFound desc = could not find container \"8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc\": container with ID starting with 8aca1e80839d115336da2c761dd2789fc225c0726132f673e28aed92f5642dcc not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.082586 4735 scope.go:117] "RemoveContainer" containerID="6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd" Jan 31 15:02:33 crc kubenswrapper[4735]: E0131 15:02:33.082898 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\": container with ID starting with 6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd not found: ID does not exist" containerID="6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.082923 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd"} err="failed to get container status \"6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\": rpc error: code = NotFound desc = could not find container \"6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd\": container with ID starting with 6e2fbb99a65661f161765b78f0624a99df321876ea827b54b257947512f42bbd not found: ID does not exist" Jan 31 15:02:33 crc kubenswrapper[4735]: I0131 15:02:33.554803 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 15:02:35 crc kubenswrapper[4735]: E0131 15:02:35.258971 4735 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:35 crc kubenswrapper[4735]: I0131 15:02:35.260478 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:35 crc kubenswrapper[4735]: W0131 15:02:35.289288 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8b16aa2674d0319aff61c7db56b7fab4cec73c1134f5ef93a80b45017406b838 WatchSource:0}: Error finding container 8b16aa2674d0319aff61c7db56b7fab4cec73c1134f5ef93a80b45017406b838: Status 404 returned error can't find the container with id 8b16aa2674d0319aff61c7db56b7fab4cec73c1134f5ef93a80b45017406b838 Jan 31 15:02:35 crc kubenswrapper[4735]: E0131 15:02:35.296912 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.241:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8fbf83c114a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 15:02:35.292971338 +0000 UTC m=+241.066300400,LastTimestamp:2026-01-31 15:02:35.292971338 +0000 UTC m=+241.066300400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 15:02:35 crc kubenswrapper[4735]: I0131 15:02:35.542443 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:36 crc kubenswrapper[4735]: I0131 15:02:36.019520 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"86287b666fb234e2c537ed34acd9c4be160edcf92d57cbb9c2326176b607b9d5"} Jan 31 15:02:36 crc kubenswrapper[4735]: I0131 15:02:36.019867 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8b16aa2674d0319aff61c7db56b7fab4cec73c1134f5ef93a80b45017406b838"} Jan 31 15:02:36 crc kubenswrapper[4735]: E0131 15:02:36.020523 4735 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:02:36 crc kubenswrapper[4735]: I0131 15:02:36.021248 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.035772 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.037846 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.038684 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.039127 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.039579 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:37 crc kubenswrapper[4735]: I0131 15:02:37.039615 4735 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.039913 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="200ms" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.241759 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="400ms" Jan 31 15:02:37 crc kubenswrapper[4735]: E0131 15:02:37.642842 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="800ms" Jan 31 15:02:38 crc kubenswrapper[4735]: E0131 15:02:38.444132 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="1.6s" Jan 31 15:02:40 crc kubenswrapper[4735]: E0131 15:02:40.048072 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.241:6443: connect: connection refused" interval="3.2s" Jan 31 15:02:41 crc kubenswrapper[4735]: I0131 15:02:41.539664 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:41 crc kubenswrapper[4735]: I0131 15:02:41.540697 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:41 crc kubenswrapper[4735]: I0131 15:02:41.560084 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:41 crc kubenswrapper[4735]: I0131 15:02:41.560136 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:41 crc kubenswrapper[4735]: E0131 15:02:41.560719 4735 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:41 crc kubenswrapper[4735]: I0131 15:02:41.561452 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:41 crc kubenswrapper[4735]: W0131 15:02:41.583712 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-28016f78eef9519e5cf4d1c6baa55b9b3de3cdec15e87187b95a7529abf356c7 WatchSource:0}: Error finding container 28016f78eef9519e5cf4d1c6baa55b9b3de3cdec15e87187b95a7529abf356c7: Status 404 returned error can't find the container with id 28016f78eef9519e5cf4d1c6baa55b9b3de3cdec15e87187b95a7529abf356c7 Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.068248 4735 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6c292a001fa88ee40b3c360242c8e9693e648e408b2d4f2a165f1c9792379553" exitCode=0 Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.068314 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6c292a001fa88ee40b3c360242c8e9693e648e408b2d4f2a165f1c9792379553"} Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.068413 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"28016f78eef9519e5cf4d1c6baa55b9b3de3cdec15e87187b95a7529abf356c7"} Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.068786 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.068811 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:42 crc kubenswrapper[4735]: E0131 15:02:42.069212 4735 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:42 crc kubenswrapper[4735]: I0131 15:02:42.069629 4735 status_manager.go:851] "Failed to get status for pod" podUID="3d548798-4069-410a-816b-3aea3ff905c6" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.241:6443: connect: connection refused" Jan 31 15:02:43 crc kubenswrapper[4735]: I0131 15:02:43.075862 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c582e5ab39ef0fa38053465ffbdad28787b23308ea790da4981a8cb26e5197e"} Jan 31 15:02:43 crc kubenswrapper[4735]: I0131 15:02:43.076188 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8f607a84d36d5ee9fab99d9305092cb5a3f3daf1429f1b9d7b601c768f4592f"} Jan 31 15:02:43 crc kubenswrapper[4735]: I0131 15:02:43.076236 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6a2db77c2be4af1839b25e96f4d7ac955a687c1989a9e88a83d467a45f939ed9"} Jan 31 15:02:44 crc kubenswrapper[4735]: I0131 15:02:44.084562 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29c19227679a79b69e2b0b3f52d0f27289f7b2dd3871eb618a0c85925fcbd072"} Jan 31 15:02:44 crc kubenswrapper[4735]: I0131 15:02:44.084926 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fdd43339077d157bed9e8fe4b257a5424200e5305e1a07c0eb26e99330e7e028"} Jan 31 15:02:44 crc kubenswrapper[4735]: I0131 15:02:44.084936 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:44 crc kubenswrapper[4735]: I0131 15:02:44.084969 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:44 crc kubenswrapper[4735]: I0131 15:02:44.084981 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:45 crc kubenswrapper[4735]: I0131 15:02:45.092910 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 15:02:45 crc kubenswrapper[4735]: I0131 15:02:45.092971 4735 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d" exitCode=1 Jan 31 15:02:45 crc kubenswrapper[4735]: I0131 15:02:45.093007 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d"} Jan 31 15:02:45 crc kubenswrapper[4735]: I0131 15:02:45.093638 4735 scope.go:117] "RemoveContainer" 
containerID="467a2c0283178cc06c1def4cae30dc9346d69d06340a01156220cc8701d9f82d" Jan 31 15:02:45 crc kubenswrapper[4735]: I0131 15:02:45.197382 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 15:02:46 crc kubenswrapper[4735]: I0131 15:02:46.102525 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 15:02:46 crc kubenswrapper[4735]: I0131 15:02:46.102858 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f200e82d11359b1a6d64edf367422915437f915006308980bea19ee03714ffec"} Jan 31 15:02:46 crc kubenswrapper[4735]: I0131 15:02:46.562462 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:46 crc kubenswrapper[4735]: I0131 15:02:46.562521 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:46 crc kubenswrapper[4735]: I0131 15:02:46.569456 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:49 crc kubenswrapper[4735]: I0131 15:02:49.095111 4735 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:49 crc kubenswrapper[4735]: I0131 15:02:49.123024 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:49 crc kubenswrapper[4735]: I0131 15:02:49.123055 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:49 crc kubenswrapper[4735]: I0131 15:02:49.135081 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:02:49 crc kubenswrapper[4735]: I0131 15:02:49.140408 4735 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeceb772-ea60-4e90-9a1d-5935ac0d68ff" Jan 31 15:02:50 crc kubenswrapper[4735]: I0131 15:02:50.129285 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:50 crc kubenswrapper[4735]: I0131 15:02:50.129745 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de7d0b76-1231-4967-92b1-cea53a7a8492" Jan 31 15:02:50 crc kubenswrapper[4735]: I0131 15:02:50.132348 4735 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aeceb772-ea60-4e90-9a1d-5935ac0d68ff" Jan 31 15:02:51 crc kubenswrapper[4735]: I0131 15:02:51.261997 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 15:02:51 crc kubenswrapper[4735]: I0131 15:02:51.262583 4735 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 15:02:51 crc kubenswrapper[4735]: I0131 15:02:51.262679 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 15:02:55 crc kubenswrapper[4735]: I0131 15:02:55.196756 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 15:02:56 crc kubenswrapper[4735]: I0131 15:02:56.504913 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 15:02:57 crc kubenswrapper[4735]: I0131 15:02:57.985632 4735 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 15:02:59 crc kubenswrapper[4735]: I0131 15:02:59.065679 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 15:02:59 crc kubenswrapper[4735]: I0131 15:02:59.242557 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 15:02:59 crc kubenswrapper[4735]: I0131 15:02:59.871376 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 15:03:00 crc kubenswrapper[4735]: I0131 15:03:00.795611 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.230283 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.267123 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.273020 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.305856 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.333899 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.364362 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.520591 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.694953 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.711461 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.814721 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 15:03:01 crc kubenswrapper[4735]: I0131 15:03:01.923040 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.215198 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.216338 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.450057 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.556396 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.770076 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.787945 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.796612 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.866050 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.931253 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 15:03:02 crc kubenswrapper[4735]: I0131 15:03:02.950043 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.327836 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.376084 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.428935 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.621077 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.768128 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.875879 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 31 15:03:03 crc kubenswrapper[4735]: I0131 15:03:03.982139 4735 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.036585 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.075842 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.104908 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.166170 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.233954 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.316588 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.358379 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.366322 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.377495 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.511626 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.601217 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.630018 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.680230 4735 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.722010 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.754697 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.854196 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 15:03:04 crc kubenswrapper[4735]: I0131 15:03:04.902772 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.053649 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.185225 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.366733 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.404456 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.423279 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.476128 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.538894 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.653014 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.666665 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.716165 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.792177 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.811834 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 15:03:05 crc kubenswrapper[4735]: I0131 15:03:05.945238 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.028100 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.042810 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.109977 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.113604 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.136581 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.150396 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.522558 4735 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.621060 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.670844 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.731811 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.835231 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.919483 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.956330 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.961749 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 15:03:06 crc kubenswrapper[4735]: I0131 15:03:06.972171 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.098791 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.101999 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.142498 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.246821 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.421265 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.428227 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.453649 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.504288 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.510175 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.555204 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 15:03:07 crc 
kubenswrapper[4735]: I0131 15:03:07.614917 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.690584 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.810207 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.832495 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.845494 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 15:03:07 crc kubenswrapper[4735]: I0131 15:03:07.875741 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.005022 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.062340 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.146826 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.277216 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.288640 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.364685 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.366289 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.376857 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.455827 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.508999 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.520801 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.540044 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.586273 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.614119 4735 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.638975 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.712213 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.724461 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.732338 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.843887 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.882785 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.909322 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.922585 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 15:03:08 crc kubenswrapper[4735]: I0131 15:03:08.944948 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.051826 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.151785 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.196566 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.315784 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.320050 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.372756 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.376199 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.427740 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.443540 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.447732 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 15:03:09 
crc kubenswrapper[4735]: I0131 15:03:09.463774 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.502377 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.553789 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.590409 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.611331 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.611875 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.652288 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.734349 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 15:03:09 crc kubenswrapper[4735]: I0131 15:03:09.896564 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.053973 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.130676 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.234266 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.282634 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.285726 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.288320 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.289413 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.291198 4735 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.494234 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.540590 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.657299 4735 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.718292 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.731890 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.771400 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.825652 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.865077 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.875358 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 15:03:10 crc kubenswrapper[4735]: I0131 15:03:10.945723 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.249910 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.273164 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.316624 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.346035 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.351070 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.401005 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.410045 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.492029 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.499602 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.526806 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.592963 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 15:03:11 crc 
kubenswrapper[4735]: I0131 15:03:11.699274 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.737609 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.790160 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.919954 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 15:03:11 crc kubenswrapper[4735]: I0131 15:03:11.923267 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.104136 4735 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.111893 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.111983 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.112022 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgtqr","openshift-marketplace/redhat-marketplace-j55nq","openshift-marketplace/marketplace-operator-79b997595-djr57","openshift-marketplace/certified-operators-vk2gn","openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.112475 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkdfl" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="registry-server" containerID="cri-o://9e6540baea7826cdf1168bf929bdd646dbd65fe517c703930568edc9419d40d3" gracePeriod=30 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.112842 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j55nq" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="registry-server" containerID="cri-o://6cedb157762cef28340373b5b54fcf878555f6827da4f9e734cf26602d6e08c0" gracePeriod=30 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.113224 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" containerID="cri-o://cd24137ade04d71a3993ea9176f211e474fe48ac64f1d6415910c861a3222401" gracePeriod=30 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.113454 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgtqr" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="registry-server" containerID="cri-o://ced793d42a37a6b8b6426cb0194ffdbb5820514a8bf5214dea6940388e219498" gracePeriod=30 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.113642 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vk2gn" 
podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="registry-server" containerID="cri-o://946a2bfbf1d419294bf7fee3f36b37909a72b51d9489d74f30a7ee9792c26a32" gracePeriod=30 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.162213 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.162192103 podStartE2EDuration="23.162192103s" podCreationTimestamp="2026-01-31 15:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:12.155246345 +0000 UTC m=+277.928575437" watchObservedRunningTime="2026-01-31 15:03:12.162192103 +0000 UTC m=+277.935521145" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.170793 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.218829 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.265996 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.271951 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.281771 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.282686 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.282918 4735 generic.go:334] "Generic (PLEG): container finished" podID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerID="9e6540baea7826cdf1168bf929bdd646dbd65fe517c703930568edc9419d40d3" exitCode=0 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.282975 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerDied","Data":"9e6540baea7826cdf1168bf929bdd646dbd65fe517c703930568edc9419d40d3"} Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.292123 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.313123 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.322905 4735 generic.go:334] "Generic (PLEG): container finished" podID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerID="6cedb157762cef28340373b5b54fcf878555f6827da4f9e734cf26602d6e08c0" exitCode=0 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.323003 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerDied","Data":"6cedb157762cef28340373b5b54fcf878555f6827da4f9e734cf26602d6e08c0"} Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.351021 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="ffc668ac-7281-4425-8430-529b4e476483" containerID="cd24137ade04d71a3993ea9176f211e474fe48ac64f1d6415910c861a3222401" exitCode=0 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.351088 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" event={"ID":"ffc668ac-7281-4425-8430-529b4e476483","Type":"ContainerDied","Data":"cd24137ade04d71a3993ea9176f211e474fe48ac64f1d6415910c861a3222401"} Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.353975 4735 generic.go:334] "Generic (PLEG): container finished" podID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerID="ced793d42a37a6b8b6426cb0194ffdbb5820514a8bf5214dea6940388e219498" exitCode=0 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.354056 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerDied","Data":"ced793d42a37a6b8b6426cb0194ffdbb5820514a8bf5214dea6940388e219498"} Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.355626 4735 generic.go:334] "Generic (PLEG): container finished" podID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerID="946a2bfbf1d419294bf7fee3f36b37909a72b51d9489d74f30a7ee9792c26a32" exitCode=0 Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.355667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerDied","Data":"946a2bfbf1d419294bf7fee3f36b37909a72b51d9489d74f30a7ee9792c26a32"} Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.374441 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.385113 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.436474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.481691 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.597715 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.612075 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.621489 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.625789 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.644723 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.666311 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672369 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities\") pod \"71e12294-d2e7-417f-a2fc-376e10c34b08\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672409 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content\") pod \"61ce2f8d-6ee0-4909-833c-72b84f64df15\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672444 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca\") pod \"ffc668ac-7281-4425-8430-529b4e476483\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672475 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities\") pod \"61ce2f8d-6ee0-4909-833c-72b84f64df15\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672493 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq8c5\" (UniqueName: \"kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5\") pod \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics\") pod \"ffc668ac-7281-4425-8430-529b4e476483\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672537 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxbww\" (UniqueName: \"kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww\") pod \"ffc668ac-7281-4425-8430-529b4e476483\" (UID: \"ffc668ac-7281-4425-8430-529b4e476483\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672562 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content\") pod \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672583 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sfqq\" (UniqueName: \"kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq\") pod \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " 
Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672601 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gng\" (UniqueName: \"kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng\") pod \"61ce2f8d-6ee0-4909-833c-72b84f64df15\" (UID: \"61ce2f8d-6ee0-4909-833c-72b84f64df15\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672618 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g76w\" (UniqueName: \"kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w\") pod \"71e12294-d2e7-417f-a2fc-376e10c34b08\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672636 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content\") pod \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672654 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content\") pod \"71e12294-d2e7-417f-a2fc-376e10c34b08\" (UID: \"71e12294-d2e7-417f-a2fc-376e10c34b08\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672674 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities\") pod \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\" (UID: \"013c1a0b-77d5-46f3-b90a-a6df449db6a7\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.672693 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities\") pod \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\" (UID: \"ec5adbe8-b11c-4f8d-8689-b1034c9436ba\") " Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.673868 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities" (OuterVolumeSpecName: "utilities") pod "ec5adbe8-b11c-4f8d-8689-b1034c9436ba" (UID: "ec5adbe8-b11c-4f8d-8689-b1034c9436ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.674461 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities" (OuterVolumeSpecName: "utilities") pod "71e12294-d2e7-417f-a2fc-376e10c34b08" (UID: "71e12294-d2e7-417f-a2fc-376e10c34b08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.679296 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ffc668ac-7281-4425-8430-529b4e476483" (UID: "ffc668ac-7281-4425-8430-529b4e476483"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.679528 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities" (OuterVolumeSpecName: "utilities") pod "61ce2f8d-6ee0-4909-833c-72b84f64df15" (UID: "61ce2f8d-6ee0-4909-833c-72b84f64df15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.681500 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities" (OuterVolumeSpecName: "utilities") pod "013c1a0b-77d5-46f3-b90a-a6df449db6a7" (UID: "013c1a0b-77d5-46f3-b90a-a6df449db6a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.683080 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5" (OuterVolumeSpecName: "kube-api-access-kq8c5") pod "ec5adbe8-b11c-4f8d-8689-b1034c9436ba" (UID: "ec5adbe8-b11c-4f8d-8689-b1034c9436ba"). InnerVolumeSpecName "kube-api-access-kq8c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.685746 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng" (OuterVolumeSpecName: "kube-api-access-j8gng") pod "61ce2f8d-6ee0-4909-833c-72b84f64df15" (UID: "61ce2f8d-6ee0-4909-833c-72b84f64df15"). InnerVolumeSpecName "kube-api-access-j8gng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.685770 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq" (OuterVolumeSpecName: "kube-api-access-8sfqq") pod "013c1a0b-77d5-46f3-b90a-a6df449db6a7" (UID: "013c1a0b-77d5-46f3-b90a-a6df449db6a7"). InnerVolumeSpecName "kube-api-access-8sfqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.685802 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w" (OuterVolumeSpecName: "kube-api-access-7g76w") pod "71e12294-d2e7-417f-a2fc-376e10c34b08" (UID: "71e12294-d2e7-417f-a2fc-376e10c34b08"). InnerVolumeSpecName "kube-api-access-7g76w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.686090 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ffc668ac-7281-4425-8430-529b4e476483" (UID: "ffc668ac-7281-4425-8430-529b4e476483"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.687571 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww" (OuterVolumeSpecName: "kube-api-access-hxbww") pod "ffc668ac-7281-4425-8430-529b4e476483" (UID: "ffc668ac-7281-4425-8430-529b4e476483"). InnerVolumeSpecName "kube-api-access-hxbww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.690198 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.690301 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.697166 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.702828 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61ce2f8d-6ee0-4909-833c-72b84f64df15" (UID: "61ce2f8d-6ee0-4909-833c-72b84f64df15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.729071 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec5adbe8-b11c-4f8d-8689-b1034c9436ba" (UID: "ec5adbe8-b11c-4f8d-8689-b1034c9436ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.735997 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "013c1a0b-77d5-46f3-b90a-a6df449db6a7" (UID: "013c1a0b-77d5-46f3-b90a-a6df449db6a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.762350 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774133 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffc668ac-7281-4425-8430-529b4e476483-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774287 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774368 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq8c5\" (UniqueName: \"kubernetes.io/projected/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-kube-api-access-kq8c5\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774452 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffc668ac-7281-4425-8430-529b4e476483-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774526 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxbww\" (UniqueName: \"kubernetes.io/projected/ffc668ac-7281-4425-8430-529b4e476483-kube-api-access-hxbww\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774605 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774664 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sfqq\" (UniqueName: \"kubernetes.io/projected/013c1a0b-77d5-46f3-b90a-a6df449db6a7-kube-api-access-8sfqq\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774721 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gng\" (UniqueName: \"kubernetes.io/projected/61ce2f8d-6ee0-4909-833c-72b84f64df15-kube-api-access-j8gng\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774785 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g76w\" (UniqueName: \"kubernetes.io/projected/71e12294-d2e7-417f-a2fc-376e10c34b08-kube-api-access-7g76w\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774846 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774910 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013c1a0b-77d5-46f3-b90a-a6df449db6a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.774985 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5adbe8-b11c-4f8d-8689-b1034c9436ba-utilities\") on node \"crc\" DevicePath \"\"" 
Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.775046 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.775100 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce2f8d-6ee0-4909-833c-72b84f64df15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.784272 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.799586 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.838582 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71e12294-d2e7-417f-a2fc-376e10c34b08" (UID: "71e12294-d2e7-417f-a2fc-376e10c34b08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.870237 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.876315 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e12294-d2e7-417f-a2fc-376e10c34b08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.885657 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 15:03:12 crc kubenswrapper[4735]: I0131 15:03:12.959111 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.013089 4735 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.051975 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.076798 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.206274 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.253439 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.290966 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.332304 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.365065 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk2gn" event={"ID":"ec5adbe8-b11c-4f8d-8689-b1034c9436ba","Type":"ContainerDied","Data":"1ba8d05f1b00e1ea323c1694ab3f510d50491a840c3bd4bc090ca0ea0e80b439"} Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.365112 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk2gn" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.365309 4735 scope.go:117] "RemoveContainer" containerID="946a2bfbf1d419294bf7fee3f36b37909a72b51d9489d74f30a7ee9792c26a32" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.369769 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkdfl" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.369817 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkdfl" event={"ID":"71e12294-d2e7-417f-a2fc-376e10c34b08","Type":"ContainerDied","Data":"71948e41600f84da1d61138b108746f2c8663f2367be6075b4b532cd134faead"} Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.377571 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j55nq" event={"ID":"61ce2f8d-6ee0-4909-833c-72b84f64df15","Type":"ContainerDied","Data":"991ecd63838a89b7766f81eb25e35c256827ef961ea91a96ff580ec3b04219d1"} Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.377598 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j55nq" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.379649 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" event={"ID":"ffc668ac-7281-4425-8430-529b4e476483","Type":"ContainerDied","Data":"fb9e5d285b7926414ec11f200381a82e801e9c4fb8cf955dc470305ee2aba8e2"} Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.379756 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-djr57" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.388069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgtqr" event={"ID":"013c1a0b-77d5-46f3-b90a-a6df449db6a7","Type":"ContainerDied","Data":"1fd3d70b368097e681a90261a5427112033a3a44f470666c0ad131e0818ac5b3"} Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.388242 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgtqr" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.396372 4735 scope.go:117] "RemoveContainer" containerID="67d44ede0d62d8fc240da1661558a443f2926d8ca59f6b7de6e3a4510f088d97" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.432545 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djr57"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.437361 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-djr57"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.448881 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j55nq"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.454791 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j55nq"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.460795 4735 scope.go:117] "RemoveContainer" containerID="2b4bbd6c0537d326651139cfe27a4259013bb6cf9b96aa51727765443037c373" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.462453 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vk2gn"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.467565 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vk2gn"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.481189 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.482491 4735 scope.go:117] "RemoveContainer" containerID="9e6540baea7826cdf1168bf929bdd646dbd65fe517c703930568edc9419d40d3" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.492988 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkdfl"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.502886 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgtqr"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.504170 4735 scope.go:117] "RemoveContainer" containerID="8f7e43ac28c681450cca8bb8a3f760d1736a4e91a625a483cb3c96913e7d82e0" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.510218 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgtqr"] Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.528210 4735 scope.go:117] "RemoveContainer" containerID="2f01e81f4bb9ce58992e35a0fb6ef1e9fd1e4bf2012eb78d693e68d5be7153e0" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.545986 4735 scope.go:117] "RemoveContainer" containerID="6cedb157762cef28340373b5b54fcf878555f6827da4f9e734cf26602d6e08c0" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.558450 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.562323 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" path="/var/lib/kubelet/pods/013c1a0b-77d5-46f3-b90a-a6df449db6a7/volumes" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.562996 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 15:03:13 crc 
kubenswrapper[4735]: I0131 15:03:13.565535 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" path="/var/lib/kubelet/pods/61ce2f8d-6ee0-4909-833c-72b84f64df15/volumes" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.566254 4735 scope.go:117] "RemoveContainer" containerID="87721be408ff6d34947e4ec76966ab6598c2b3516b9f21b4e6af0ae4549e2523" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.567177 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" path="/var/lib/kubelet/pods/71e12294-d2e7-417f-a2fc-376e10c34b08/volumes" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.569330 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" path="/var/lib/kubelet/pods/ec5adbe8-b11c-4f8d-8689-b1034c9436ba/volumes" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.572734 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc668ac-7281-4425-8430-529b4e476483" path="/var/lib/kubelet/pods/ffc668ac-7281-4425-8430-529b4e476483/volumes" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.584920 4735 scope.go:117] "RemoveContainer" containerID="0e6117b46d6144d1b17715dd06009511960a88475f0d537679e3d63b77167985" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.605840 4735 scope.go:117] "RemoveContainer" containerID="cd24137ade04d71a3993ea9176f211e474fe48ac64f1d6415910c861a3222401" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.621138 4735 scope.go:117] "RemoveContainer" containerID="ced793d42a37a6b8b6426cb0194ffdbb5820514a8bf5214dea6940388e219498" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.635344 4735 scope.go:117] "RemoveContainer" containerID="00d60ba7abfe3335a12529a62691837098247f9adc325c6985407b46f868a3f3" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.652838 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.657187 4735 scope.go:117] "RemoveContainer" containerID="1709dcfd2c2c942af6eeeb1acede9452dead35fa6ec51c0e47ff1fe3c7466b94" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.724405 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.731473 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.892321 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 15:03:13 crc kubenswrapper[4735]: I0131 15:03:13.952281 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.030902 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.142744 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.145844 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" 
Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.184708 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.245739 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.483004 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.536873 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.555145 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.652970 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.680349 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.711533 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 15:03:14 crc kubenswrapper[4735]: I0131 15:03:14.976687 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.245416 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.310328 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.402453 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.491020 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.620194 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 15:03:15 crc kubenswrapper[4735]: I0131 15:03:15.946656 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.071327 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.095280 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.299615 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.343471 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 
15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.346592 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.369379 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.480724 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.583486 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.636943 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.646283 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.675082 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.680774 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.789244 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.839915 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5l67c"] Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840128 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840141 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840149 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840155 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840164 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840171 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840181 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840189 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840198 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840204 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840212 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d548798-4069-410a-816b-3aea3ff905c6" containerName="installer" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840218 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d548798-4069-410a-816b-3aea3ff905c6" containerName="installer" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840227 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840232 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840246 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840253 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840261 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840268 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="extract-utilities" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840280 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840286 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840296 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840302 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840309 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840314 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 15:03:16.840324 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840331 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="extract-content" Jan 31 15:03:16 crc kubenswrapper[4735]: E0131 
15:03:16.840343 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840350 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840463 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="013c1a0b-77d5-46f3-b90a-a6df449db6a7" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840475 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce2f8d-6ee0-4909-833c-72b84f64df15" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840486 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5adbe8-b11c-4f8d-8689-b1034c9436ba" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840495 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e12294-d2e7-417f-a2fc-376e10c34b08" containerName="registry-server" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840504 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d548798-4069-410a-816b-3aea3ff905c6" containerName="installer" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840512 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc668ac-7281-4425-8430-529b4e476483" containerName="marketplace-operator" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.840867 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.842809 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.844051 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.844411 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.845764 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.851194 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.863563 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5l67c"] Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.946622 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 15:03:16 crc kubenswrapper[4735]: I0131 15:03:16.949831 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.035888 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.035944 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.036057 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldbc\" (UniqueName: \"kubernetes.io/projected/26db5009-aa32-4023-88bf-05ba79d4d907-kube-api-access-dldbc\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.048563 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.137535 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dldbc\" (UniqueName: \"kubernetes.io/projected/26db5009-aa32-4023-88bf-05ba79d4d907-kube-api-access-dldbc\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.137587 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.137608 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.139037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.149126 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/26db5009-aa32-4023-88bf-05ba79d4d907-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.153825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-dldbc\" (UniqueName: \"kubernetes.io/projected/26db5009-aa32-4023-88bf-05ba79d4d907-kube-api-access-dldbc\") pod \"marketplace-operator-79b997595-5l67c\" (UID: \"26db5009-aa32-4023-88bf-05ba79d4d907\") " pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.155718 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.365043 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5l67c"] Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.444028 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" event={"ID":"26db5009-aa32-4023-88bf-05ba79d4d907","Type":"ContainerStarted","Data":"9e173bb796314b5ad2d14ce21f0e42f3eb17e59cb9f3f11f57d9458368f01afa"} Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.471329 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.510244 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.571662 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.649294 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.712953 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 15:03:17 crc kubenswrapper[4735]: I0131 15:03:17.801261 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.450027 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" event={"ID":"26db5009-aa32-4023-88bf-05ba79d4d907","Type":"ContainerStarted","Data":"01dd028fb4cc8b226a20c8f1049c52f351257a952c8ba1f6ed852d18afa7db4d"} Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.451470 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.486373 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.498521 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5l67c" podStartSLOduration=2.498498149 podStartE2EDuration="2.498498149s" podCreationTimestamp="2026-01-31 15:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:18.497786879 +0000 UTC m=+284.271115931" watchObservedRunningTime="2026-01-31 15:03:18.498498149 +0000 UTC m=+284.271827201" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.660409 4735 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.813026 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 15:03:18 crc kubenswrapper[4735]: I0131 15:03:18.873994 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 15:03:19 crc kubenswrapper[4735]: I0131 15:03:19.346505 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 15:03:21 crc kubenswrapper[4735]: I0131 15:03:21.567176 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 15:03:23 crc kubenswrapper[4735]: I0131 15:03:23.019737 4735 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 15:03:23 crc kubenswrapper[4735]: I0131 15:03:23.020280 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://86287b666fb234e2c537ed34acd9c4be160edcf92d57cbb9c2326176b607b9d5" gracePeriod=5 Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.499285 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.499936 4735 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="86287b666fb234e2c537ed34acd9c4be160edcf92d57cbb9c2326176b607b9d5" exitCode=137 Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.614338 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.614455 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.623960 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624017 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624090 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624090 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624174 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624214 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624267 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624371 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624619 4735 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624649 4735 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624668 4735 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.624689 4735 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.634840 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4735]: I0131 15:03:28.725505 4735 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:29 crc kubenswrapper[4735]: I0131 15:03:29.510571 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 15:03:29 crc kubenswrapper[4735]: I0131 15:03:29.510863 4735 scope.go:117] "RemoveContainer" containerID="86287b666fb234e2c537ed34acd9c4be160edcf92d57cbb9c2326176b607b9d5" Jan 31 15:03:29 crc kubenswrapper[4735]: I0131 15:03:29.511018 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 15:03:29 crc kubenswrapper[4735]: I0131 15:03:29.546700 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.548590 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hw9dp"] Jan 31 15:03:30 crc kubenswrapper[4735]: E0131 15:03:30.549014 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.549042 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.549299 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.551019 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.553295 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hw9dp"] Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.555543 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.649736 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-catalog-content\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.649795 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4644\" (UniqueName: \"kubernetes.io/projected/5d432da9-d868-493b-be4c-3cb7c8b9899e-kube-api-access-m4644\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.649817 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-utilities\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.740958 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6c52m"] Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.742130 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.744546 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751692 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4644\" (UniqueName: \"kubernetes.io/projected/5d432da9-d868-493b-be4c-3cb7c8b9899e-kube-api-access-m4644\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751743 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-utilities\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751805 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj4j\" (UniqueName: \"kubernetes.io/projected/dd10e32d-9552-474f-abfa-5a15cb41d654-kube-api-access-nnj4j\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751834 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-utilities\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751876 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-catalog-content\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.751899 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-catalog-content\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.752643 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-utilities\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.752936 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d432da9-d868-493b-be4c-3cb7c8b9899e-catalog-content\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.762188 4735 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-operators-6c52m"] Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.774859 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4644\" (UniqueName: \"kubernetes.io/projected/5d432da9-d868-493b-be4c-3cb7c8b9899e-kube-api-access-m4644\") pod \"community-operators-hw9dp\" (UID: \"5d432da9-d868-493b-be4c-3cb7c8b9899e\") " pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.853046 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj4j\" (UniqueName: \"kubernetes.io/projected/dd10e32d-9552-474f-abfa-5a15cb41d654-kube-api-access-nnj4j\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.853098 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-utilities\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.853144 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-catalog-content\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.853727 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-catalog-content\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.853916 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd10e32d-9552-474f-abfa-5a15cb41d654-utilities\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.869818 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:30 crc kubenswrapper[4735]: I0131 15:03:30.878000 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj4j\" (UniqueName: \"kubernetes.io/projected/dd10e32d-9552-474f-abfa-5a15cb41d654-kube-api-access-nnj4j\") pod \"redhat-operators-6c52m\" (UID: \"dd10e32d-9552-474f-abfa-5a15cb41d654\") " pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.056617 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.153056 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hw9dp"] Jan 31 15:03:31 crc kubenswrapper[4735]: W0131 15:03:31.202869 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d432da9_d868_493b_be4c_3cb7c8b9899e.slice/crio-667b465e92b6433a7cb3dd780640e17accc98922cb68153c7cc0faa1c8e291f3 WatchSource:0}: Error finding container 667b465e92b6433a7cb3dd780640e17accc98922cb68153c7cc0faa1c8e291f3: Status 404 returned error can't find the container with id 667b465e92b6433a7cb3dd780640e17accc98922cb68153c7cc0faa1c8e291f3 Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.285580 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6c52m"] Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.528178 4735 generic.go:334] "Generic (PLEG): container finished" podID="dd10e32d-9552-474f-abfa-5a15cb41d654" containerID="038dba0118479a0528122fd7e2789fa1758fa896a841a159a62a6a5ac61c815d" exitCode=0 Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.528280 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6c52m" event={"ID":"dd10e32d-9552-474f-abfa-5a15cb41d654","Type":"ContainerDied","Data":"038dba0118479a0528122fd7e2789fa1758fa896a841a159a62a6a5ac61c815d"} Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.528330 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6c52m" event={"ID":"dd10e32d-9552-474f-abfa-5a15cb41d654","Type":"ContainerStarted","Data":"a2450cbd028b8b430d835932af72b670c82a227296c87c76e38bf2143e6ad128"} Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.530272 4735 generic.go:334] "Generic (PLEG): container finished" podID="5d432da9-d868-493b-be4c-3cb7c8b9899e" containerID="5e720bd7f10818afa801041d6daaf0814dc2285f730a8b0573879bb9e93a745b" exitCode=0 Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.530330 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hw9dp" event={"ID":"5d432da9-d868-493b-be4c-3cb7c8b9899e","Type":"ContainerDied","Data":"5e720bd7f10818afa801041d6daaf0814dc2285f730a8b0573879bb9e93a745b"} Jan 31 15:03:31 crc kubenswrapper[4735]: I0131 15:03:31.530349 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hw9dp" event={"ID":"5d432da9-d868-493b-be4c-3cb7c8b9899e","Type":"ContainerStarted","Data":"667b465e92b6433a7cb3dd780640e17accc98922cb68153c7cc0faa1c8e291f3"} Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.542673 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hw9dp" event={"ID":"5d432da9-d868-493b-be4c-3cb7c8b9899e","Type":"ContainerStarted","Data":"1061cb637ac1c8264370f583d964298a517e8e27198de9579a8b3856fe41d006"} Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.544311 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6c52m" event={"ID":"dd10e32d-9552-474f-abfa-5a15cb41d654","Type":"ContainerStarted","Data":"9f0b8fcce51b8caaf16086ff05e7591da1a9a904264f9efdb1cdb1b667b7de2e"} Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.943263 4735 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-wvwvj"] Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.945351 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.947324 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 15:03:32 crc kubenswrapper[4735]: I0131 15:03:32.961414 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvwvj"] Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.092143 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-utilities\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.092340 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-catalog-content\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.092472 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbv7\" (UniqueName: \"kubernetes.io/projected/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-kube-api-access-wtbv7\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.149080 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7rmb"] Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.150479 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.154860 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.182534 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7rmb"] Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.193564 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-catalog-content\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.193762 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrkd\" (UniqueName: \"kubernetes.io/projected/bd27f576-2b59-40db-b27f-b41422fdeea3-kube-api-access-9qrkd\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.193796 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-utilities\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.193854 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-catalog-content\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.193964 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbv7\" (UniqueName: \"kubernetes.io/projected/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-kube-api-access-wtbv7\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.194008 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-utilities\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.194309 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-catalog-content\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.194373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-utilities\") pod \"certified-operators-wvwvj\" (UID: 
\"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.244744 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtbv7\" (UniqueName: \"kubernetes.io/projected/8129b1bd-e5c2-4d3c-b631-b983b1a424c4-kube-api-access-wtbv7\") pod \"certified-operators-wvwvj\" (UID: \"8129b1bd-e5c2-4d3c-b631-b983b1a424c4\") " pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.270487 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.296240 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-catalog-content\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.296354 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrkd\" (UniqueName: \"kubernetes.io/projected/bd27f576-2b59-40db-b27f-b41422fdeea3-kube-api-access-9qrkd\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.296468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-utilities\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.297051 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-catalog-content\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.298861 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd27f576-2b59-40db-b27f-b41422fdeea3-utilities\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.319150 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrkd\" (UniqueName: \"kubernetes.io/projected/bd27f576-2b59-40db-b27f-b41422fdeea3-kube-api-access-9qrkd\") pod \"redhat-marketplace-r7rmb\" (UID: \"bd27f576-2b59-40db-b27f-b41422fdeea3\") " pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.467548 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.482938 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvwvj"] Jan 31 15:03:33 crc kubenswrapper[4735]: W0131 15:03:33.488980 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8129b1bd_e5c2_4d3c_b631_b983b1a424c4.slice/crio-5841efd2b3d78825f63eaf2d30681c638a3e92d12e146189a86a82ae7065c4e6 WatchSource:0}: Error finding container 5841efd2b3d78825f63eaf2d30681c638a3e92d12e146189a86a82ae7065c4e6: Status 404 returned error can't find the container with id 5841efd2b3d78825f63eaf2d30681c638a3e92d12e146189a86a82ae7065c4e6 Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.558598 4735 generic.go:334] "Generic (PLEG): container finished" podID="dd10e32d-9552-474f-abfa-5a15cb41d654" containerID="9f0b8fcce51b8caaf16086ff05e7591da1a9a904264f9efdb1cdb1b667b7de2e" exitCode=0 Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.564186 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvwvj" event={"ID":"8129b1bd-e5c2-4d3c-b631-b983b1a424c4","Type":"ContainerStarted","Data":"5841efd2b3d78825f63eaf2d30681c638a3e92d12e146189a86a82ae7065c4e6"} Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.564226 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6c52m" event={"ID":"dd10e32d-9552-474f-abfa-5a15cb41d654","Type":"ContainerDied","Data":"9f0b8fcce51b8caaf16086ff05e7591da1a9a904264f9efdb1cdb1b667b7de2e"} Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.567239 4735 generic.go:334] "Generic (PLEG): container finished" podID="5d432da9-d868-493b-be4c-3cb7c8b9899e" containerID="1061cb637ac1c8264370f583d964298a517e8e27198de9579a8b3856fe41d006" exitCode=0 Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.567271 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hw9dp" event={"ID":"5d432da9-d868-493b-be4c-3cb7c8b9899e","Type":"ContainerDied","Data":"1061cb637ac1c8264370f583d964298a517e8e27198de9579a8b3856fe41d006"} Jan 31 15:03:33 crc kubenswrapper[4735]: I0131 15:03:33.674514 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7rmb"] Jan 31 15:03:33 crc kubenswrapper[4735]: W0131 15:03:33.705961 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd27f576_2b59_40db_b27f_b41422fdeea3.slice/crio-01d62820dedeccd8f83febe97aeba7446cb5f2e19c23a370593c01335da051f5 WatchSource:0}: Error finding container 01d62820dedeccd8f83febe97aeba7446cb5f2e19c23a370593c01335da051f5: Status 404 returned error can't find the container with id 01d62820dedeccd8f83febe97aeba7446cb5f2e19c23a370593c01335da051f5 Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.574767 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6c52m" event={"ID":"dd10e32d-9552-474f-abfa-5a15cb41d654","Type":"ContainerStarted","Data":"93e333951d49391ca6f73796c7ad677105afa5724b5bc21f29fbd99c64830c9e"} Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.576926 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hw9dp" 
event={"ID":"5d432da9-d868-493b-be4c-3cb7c8b9899e","Type":"ContainerStarted","Data":"d6bab10060589fda64d43a0450568a5b4650131c404aa5226c9092bdace34d3c"} Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.578334 4735 generic.go:334] "Generic (PLEG): container finished" podID="bd27f576-2b59-40db-b27f-b41422fdeea3" containerID="092f2bc3705904ae92aa8e7aecf116830ae4f75fd8f8e5466f013d876664ee66" exitCode=0 Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.578404 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7rmb" event={"ID":"bd27f576-2b59-40db-b27f-b41422fdeea3","Type":"ContainerDied","Data":"092f2bc3705904ae92aa8e7aecf116830ae4f75fd8f8e5466f013d876664ee66"} Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.578452 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7rmb" event={"ID":"bd27f576-2b59-40db-b27f-b41422fdeea3","Type":"ContainerStarted","Data":"01d62820dedeccd8f83febe97aeba7446cb5f2e19c23a370593c01335da051f5"} Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.581160 4735 generic.go:334] "Generic (PLEG): container finished" podID="8129b1bd-e5c2-4d3c-b631-b983b1a424c4" containerID="dc83d195ea69f507affdb471efd266dceb4bcd12e06beeb79f10d33186f8ff1d" exitCode=0 Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.581188 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvwvj" event={"ID":"8129b1bd-e5c2-4d3c-b631-b983b1a424c4","Type":"ContainerDied","Data":"dc83d195ea69f507affdb471efd266dceb4bcd12e06beeb79f10d33186f8ff1d"} Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.601850 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6c52m" podStartSLOduration=2.095598938 podStartE2EDuration="4.60180629s" podCreationTimestamp="2026-01-31 15:03:30 +0000 UTC" firstStartedPulling="2026-01-31 15:03:31.529800625 +0000 UTC m=+297.303129677" lastFinishedPulling="2026-01-31 15:03:34.036007977 +0000 UTC m=+299.809337029" observedRunningTime="2026-01-31 15:03:34.598564807 +0000 UTC m=+300.371893869" watchObservedRunningTime="2026-01-31 15:03:34.60180629 +0000 UTC m=+300.375135332" Jan 31 15:03:34 crc kubenswrapper[4735]: I0131 15:03:34.618019 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hw9dp" podStartSLOduration=2.19363617 podStartE2EDuration="4.618004001s" podCreationTimestamp="2026-01-31 15:03:30 +0000 UTC" firstStartedPulling="2026-01-31 15:03:31.532141582 +0000 UTC m=+297.305470624" lastFinishedPulling="2026-01-31 15:03:33.956509403 +0000 UTC m=+299.729838455" observedRunningTime="2026-01-31 15:03:34.617291471 +0000 UTC m=+300.390620513" watchObservedRunningTime="2026-01-31 15:03:34.618004001 +0000 UTC m=+300.391333043" Jan 31 15:03:35 crc kubenswrapper[4735]: I0131 15:03:35.151925 4735 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 15:03:35 crc kubenswrapper[4735]: I0131 15:03:35.599141 4735 generic.go:334] "Generic (PLEG): container finished" podID="bd27f576-2b59-40db-b27f-b41422fdeea3" containerID="4528343e1998c07f671d15d18a5f74f9092a52f94ed700895701d09a9d13ea02" exitCode=0 Jan 31 15:03:35 crc kubenswrapper[4735]: I0131 15:03:35.599527 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7rmb" 
event={"ID":"bd27f576-2b59-40db-b27f-b41422fdeea3","Type":"ContainerDied","Data":"4528343e1998c07f671d15d18a5f74f9092a52f94ed700895701d09a9d13ea02"} Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.282594 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.283112 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" containerID="cri-o://ed8c4fdc8cc4e3f46a80f061e89d4cd4880f60082fbbb62445c3c027b7d0c538" gracePeriod=30 Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.380413 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.380732 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" containerID="cri-o://94eee674b8406b06605308427b2e6829c050f7a3c2a7265c327493dba5536299" gracePeriod=30 Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.610920 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7rmb" event={"ID":"bd27f576-2b59-40db-b27f-b41422fdeea3","Type":"ContainerStarted","Data":"dbf744b195a547e0e05a9ea09aa6f6683b19eae952544b0b3b125f4090b613d6"} Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.614097 4735 generic.go:334] "Generic (PLEG): container finished" podID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerID="ed8c4fdc8cc4e3f46a80f061e89d4cd4880f60082fbbb62445c3c027b7d0c538" exitCode=0 Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.614189 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" event={"ID":"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c","Type":"ContainerDied","Data":"ed8c4fdc8cc4e3f46a80f061e89d4cd4880f60082fbbb62445c3c027b7d0c538"} Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.640075 4735 generic.go:334] "Generic (PLEG): container finished" podID="8129b1bd-e5c2-4d3c-b631-b983b1a424c4" containerID="c448e00c2442b59247c124cc5fe643655a9fb612b7a3086fa1f42689f932652e" exitCode=0 Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.640727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvwvj" event={"ID":"8129b1bd-e5c2-4d3c-b631-b983b1a424c4","Type":"ContainerDied","Data":"c448e00c2442b59247c124cc5fe643655a9fb612b7a3086fa1f42689f932652e"} Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.653009 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7rmb" podStartSLOduration=2.197860114 podStartE2EDuration="3.652990844s" podCreationTimestamp="2026-01-31 15:03:33 +0000 UTC" firstStartedPulling="2026-01-31 15:03:34.580604596 +0000 UTC m=+300.353933638" lastFinishedPulling="2026-01-31 15:03:36.035735326 +0000 UTC m=+301.809064368" observedRunningTime="2026-01-31 15:03:36.646110588 +0000 UTC m=+302.419439630" watchObservedRunningTime="2026-01-31 15:03:36.652990844 +0000 UTC m=+302.426319886" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.676767 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" event={"ID":"3207b01f-1e8b-40df-8f73-8a46dbc61847","Type":"ContainerDied","Data":"94eee674b8406b06605308427b2e6829c050f7a3c2a7265c327493dba5536299"} Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.676775 4735 generic.go:334] "Generic (PLEG): container finished" podID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerID="94eee674b8406b06605308427b2e6829c050f7a3c2a7265c327493dba5536299" exitCode=0 Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.700567 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.771366 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.815981 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:36 crc kubenswrapper[4735]: E0131 15:03:36.816189 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.816201 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: E0131 15:03:36.816215 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.816221 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.816313 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.816338 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.816718 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.836116 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.843650 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles\") pod \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.843701 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert\") pod \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.843771 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4knjb\" (UniqueName: \"kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb\") pod \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.843803 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config\") pod \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.843822 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca\") pod \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\" (UID: \"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.844699 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" (UID: "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.844939 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" (UID: "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.846980 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config" (OuterVolumeSpecName: "config") pod "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" (UID: "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.849939 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" (UID: "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.852398 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb" (OuterVolumeSpecName: "kube-api-access-4knjb") pod "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" (UID: "a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c"). InnerVolumeSpecName "kube-api-access-4knjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.889623 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.890680 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.899207 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.944892 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config\") pod \"3207b01f-1e8b-40df-8f73-8a46dbc61847\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.944988 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca\") pod \"3207b01f-1e8b-40df-8f73-8a46dbc61847\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945015 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2mr8\" (UniqueName: \"kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8\") pod \"3207b01f-1e8b-40df-8f73-8a46dbc61847\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945067 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert\") pod \"3207b01f-1e8b-40df-8f73-8a46dbc61847\" (UID: \"3207b01f-1e8b-40df-8f73-8a46dbc61847\") " Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945193 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945225 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9nb8\" (UniqueName: \"kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945269 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945339 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945388 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945399 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945408 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945418 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945488 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4knjb\" (UniqueName: \"kubernetes.io/projected/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c-kube-api-access-4knjb\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.945854 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config" (OuterVolumeSpecName: "config") pod "3207b01f-1e8b-40df-8f73-8a46dbc61847" (UID: "3207b01f-1e8b-40df-8f73-8a46dbc61847"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.946033 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca" (OuterVolumeSpecName: "client-ca") pod "3207b01f-1e8b-40df-8f73-8a46dbc61847" (UID: "3207b01f-1e8b-40df-8f73-8a46dbc61847"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.948319 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3207b01f-1e8b-40df-8f73-8a46dbc61847" (UID: "3207b01f-1e8b-40df-8f73-8a46dbc61847"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:36 crc kubenswrapper[4735]: I0131 15:03:36.949807 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8" (OuterVolumeSpecName: "kube-api-access-s2mr8") pod "3207b01f-1e8b-40df-8f73-8a46dbc61847" (UID: "3207b01f-1e8b-40df-8f73-8a46dbc61847"). InnerVolumeSpecName "kube-api-access-s2mr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046278 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046324 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9nb8\" (UniqueName: \"kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046354 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046438 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046476 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046502 
4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046561 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbd6\" (UniqueName: \"kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046588 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046631 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046646 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3207b01f-1e8b-40df-8f73-8a46dbc61847-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046658 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2mr8\" (UniqueName: \"kubernetes.io/projected/3207b01f-1e8b-40df-8f73-8a46dbc61847-kube-api-access-s2mr8\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.046670 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3207b01f-1e8b-40df-8f73-8a46dbc61847-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.047641 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.047703 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " 
pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.048155 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.051219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.060790 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9nb8\" (UniqueName: \"kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8\") pod \"controller-manager-6fb9d56fc7-r7xkz\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.132957 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.148371 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.148569 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.148665 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbd6\" (UniqueName: \"kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.148748 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.149436 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: 
\"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.149662 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.152031 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.174063 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbd6\" (UniqueName: \"kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6\") pod \"route-controller-manager-76c8f6f747-2rnm5\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.203216 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.374432 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.441912 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.569291 4735 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5rl28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.569683 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.667240 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.681986 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.684262 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvwvj" 
event={"ID":"8129b1bd-e5c2-4d3c-b631-b983b1a424c4","Type":"ContainerStarted","Data":"0f972247067bd35603b4c76abb27e6815156bdf14f7ad9ffe3684ef6eaeda5be"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.686668 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" event={"ID":"ab86c29d-1802-47c8-87c3-c915c29fdbf6","Type":"ContainerStarted","Data":"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.686711 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" event={"ID":"ab86c29d-1802-47c8-87c3-c915c29fdbf6","Type":"ContainerStarted","Data":"8cb8b8efaf4542f9f3a928876ce75e547fc2d3ca1bd1467c61ee8f44596c6bf2"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.688495 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.693274 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" event={"ID":"3207b01f-1e8b-40df-8f73-8a46dbc61847","Type":"ContainerDied","Data":"1154169686e1ba72dd9f168f8821d67f47d64e556e8ee4ada52decb15d0d1430"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.693283 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.693367 4735 scope.go:117] "RemoveContainer" containerID="94eee674b8406b06605308427b2e6829c050f7a3c2a7265c327493dba5536299" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.693789 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.695648 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" event={"ID":"ce46594d-615f-45c8-84da-ed2a7b81ebbd","Type":"ContainerStarted","Data":"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.695709 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.695723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" event={"ID":"ce46594d-615f-45c8-84da-ed2a7b81ebbd","Type":"ContainerStarted","Data":"3f31db4186425352219f22cb5691e856b30af00d5e023acd302f75a6a688db25"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.701846 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.702310 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5rl28" event={"ID":"a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c","Type":"ContainerDied","Data":"4b4bcbbdf9aa01b8114ec72ec2993e590588c5a85c2d7da1dfb934c41337c346"} Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.713253 4735 scope.go:117] "RemoveContainer" containerID="ed8c4fdc8cc4e3f46a80f061e89d4cd4880f60082fbbb62445c3c027b7d0c538" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.725583 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wvwvj" podStartSLOduration=3.108254711 podStartE2EDuration="5.725561678s" podCreationTimestamp="2026-01-31 15:03:32 +0000 UTC" firstStartedPulling="2026-01-31 15:03:34.583005994 +0000 UTC m=+300.356335036" lastFinishedPulling="2026-01-31 15:03:37.200312961 +0000 UTC m=+302.973642003" observedRunningTime="2026-01-31 15:03:37.71334857 +0000 UTC m=+303.486677622" watchObservedRunningTime="2026-01-31 15:03:37.725561678 +0000 UTC m=+303.498890730" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.729942 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.735298 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5rl28"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.736074 4735 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7qzcz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" start-of-body= Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.736117 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.773142 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" podStartSLOduration=1.773120163 podStartE2EDuration="1.773120163s" podCreationTimestamp="2026-01-31 15:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:37.757500408 +0000 UTC m=+303.530829470" watchObservedRunningTime="2026-01-31 15:03:37.773120163 +0000 UTC m=+303.546449205" Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.773326 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:03:37 crc kubenswrapper[4735]: I0131 15:03:37.779113 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7qzcz"] Jan 31 15:03:38 crc kubenswrapper[4735]: I0131 15:03:38.084660 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:38 crc kubenswrapper[4735]: I0131 15:03:38.104044 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" podStartSLOduration=2.104025916 podStartE2EDuration="2.104025916s" podCreationTimestamp="2026-01-31 15:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:37.793929165 +0000 UTC m=+303.567258227" watchObservedRunningTime="2026-01-31 15:03:38.104025916 +0000 UTC m=+303.877354968" Jan 31 15:03:38 crc kubenswrapper[4735]: I0131 15:03:38.707597 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" podUID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" containerName="controller-manager" containerID="cri-o://571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949" gracePeriod=30 Jan 31 15:03:38 crc kubenswrapper[4735]: I0131 15:03:38.708498 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" podUID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" containerName="route-controller-manager" containerID="cri-o://cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df" gracePeriod=30 Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.558027 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3207b01f-1e8b-40df-8f73-8a46dbc61847" path="/var/lib/kubelet/pods/3207b01f-1e8b-40df-8f73-8a46dbc61847/volumes" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.559048 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c" path="/var/lib/kubelet/pods/a6d0ccd7-c5f1-485d-9c2c-f27a0bbb388c/volumes" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.679731 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.687069 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.714665 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:39 crc kubenswrapper[4735]: E0131 15:03:39.714860 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" containerName="controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.714872 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" containerName="controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: E0131 15:03:39.714885 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" containerName="route-controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.714892 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" containerName="route-controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.714993 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" containerName="controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.715010 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" containerName="route-controller-manager" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.715358 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.718989 4735 generic.go:334] "Generic (PLEG): container finished" podID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" containerID="571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949" exitCode=0 Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.719033 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" event={"ID":"ab86c29d-1802-47c8-87c3-c915c29fdbf6","Type":"ContainerDied","Data":"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949"} Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.719054 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" event={"ID":"ab86c29d-1802-47c8-87c3-c915c29fdbf6","Type":"ContainerDied","Data":"8cb8b8efaf4542f9f3a928876ce75e547fc2d3ca1bd1467c61ee8f44596c6bf2"} Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.719070 4735 scope.go:117] "RemoveContainer" containerID="571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.719139 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.721085 4735 generic.go:334] "Generic (PLEG): container finished" podID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" containerID="cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df" exitCode=0 Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.721102 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" event={"ID":"ce46594d-615f-45c8-84da-ed2a7b81ebbd","Type":"ContainerDied","Data":"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df"} Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.721114 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" event={"ID":"ce46594d-615f-45c8-84da-ed2a7b81ebbd","Type":"ContainerDied","Data":"3f31db4186425352219f22cb5691e856b30af00d5e023acd302f75a6a688db25"} Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.721145 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.751602 4735 scope.go:117] "RemoveContainer" containerID="571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949" Jan 31 15:03:39 crc kubenswrapper[4735]: E0131 15:03:39.752206 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949\": container with ID starting with 571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949 not found: ID does not exist" containerID="571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.752271 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949"} err="failed to get container status \"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949\": rpc error: code = NotFound desc = could not find container \"571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949\": container with ID starting with 571503bab7c7c1547c28f71f0b50c723aa1e494f323e67351f85c38b9aca1949 not found: ID does not exist" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.752315 4735 scope.go:117] "RemoveContainer" containerID="cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.759109 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.780977 4735 scope.go:117] "RemoveContainer" containerID="cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df" Jan 31 15:03:39 crc kubenswrapper[4735]: E0131 15:03:39.781637 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df\": container with ID starting with cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df not found: ID does not exist" containerID="cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df" Jan 31 
15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.781793 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df"} err="failed to get container status \"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df\": rpc error: code = NotFound desc = could not find container \"cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df\": container with ID starting with cbc16372391e5b7337fbb3dd2fffd0e9e497e104eef413153240caae2aa4f7df not found: ID does not exist" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.809646 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles\") pod \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810004 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca\") pod \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810122 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert\") pod \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810246 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca\") pod \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810344 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert\") pod \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810456 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9nb8\" (UniqueName: \"kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8\") pod \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810614 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbd6\" (UniqueName: \"kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6\") pod \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810730 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config\") pod \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\" (UID: \"ce46594d-615f-45c8-84da-ed2a7b81ebbd\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.810834 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config\") pod \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\" (UID: \"ab86c29d-1802-47c8-87c3-c915c29fdbf6\") " Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.811764 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.811004 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce46594d-615f-45c8-84da-ed2a7b81ebbd" (UID: "ce46594d-615f-45c8-84da-ed2a7b81ebbd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.811033 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ab86c29d-1802-47c8-87c3-c915c29fdbf6" (UID: "ab86c29d-1802-47c8-87c3-c915c29fdbf6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.811280 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config" (OuterVolumeSpecName: "config") pod "ce46594d-615f-45c8-84da-ed2a7b81ebbd" (UID: "ce46594d-615f-45c8-84da-ed2a7b81ebbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.811525 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config" (OuterVolumeSpecName: "config") pod "ab86c29d-1802-47c8-87c3-c915c29fdbf6" (UID: "ab86c29d-1802-47c8-87c3-c915c29fdbf6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812337 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlss2\" (UniqueName: \"kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812610 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812724 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812753 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812772 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.812791 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce46594d-615f-45c8-84da-ed2a7b81ebbd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.816642 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca" (OuterVolumeSpecName: "client-ca") pod "ab86c29d-1802-47c8-87c3-c915c29fdbf6" (UID: "ab86c29d-1802-47c8-87c3-c915c29fdbf6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.828621 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab86c29d-1802-47c8-87c3-c915c29fdbf6" (UID: "ab86c29d-1802-47c8-87c3-c915c29fdbf6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.830259 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce46594d-615f-45c8-84da-ed2a7b81ebbd" (UID: "ce46594d-615f-45c8-84da-ed2a7b81ebbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.830921 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6" (OuterVolumeSpecName: "kube-api-access-ztbd6") pod "ce46594d-615f-45c8-84da-ed2a7b81ebbd" (UID: "ce46594d-615f-45c8-84da-ed2a7b81ebbd"). InnerVolumeSpecName "kube-api-access-ztbd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.835989 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8" (OuterVolumeSpecName: "kube-api-access-n9nb8") pod "ab86c29d-1802-47c8-87c3-c915c29fdbf6" (UID: "ab86c29d-1802-47c8-87c3-c915c29fdbf6"). InnerVolumeSpecName "kube-api-access-n9nb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913345 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlss2\" (UniqueName: \"kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913475 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913518 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913568 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbd6\" (UniqueName: \"kubernetes.io/projected/ce46594d-615f-45c8-84da-ed2a7b81ebbd-kube-api-access-ztbd6\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913608 4735 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab86c29d-1802-47c8-87c3-c915c29fdbf6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913621 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab86c29d-1802-47c8-87c3-c915c29fdbf6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913633 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce46594d-615f-45c8-84da-ed2a7b81ebbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.913643 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9nb8\" (UniqueName: \"kubernetes.io/projected/ab86c29d-1802-47c8-87c3-c915c29fdbf6-kube-api-access-n9nb8\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.914875 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.915712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.919529 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:39 crc kubenswrapper[4735]: I0131 15:03:39.940061 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlss2\" (UniqueName: \"kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2\") pod \"route-controller-manager-589c67869d-68p5z\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.048496 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.056394 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.065946 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-r7xkz"] Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.094901 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.101156 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-2rnm5"] Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.327027 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:40 crc kubenswrapper[4735]: W0131 15:03:40.334521 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39397dd7_204c_4a41_be78_719c35bd432f.slice/crio-afea555f8a41bdf090bbbe35202f90596f5893d77d04f1799af4fb37a9f6f09a WatchSource:0}: Error finding container afea555f8a41bdf090bbbe35202f90596f5893d77d04f1799af4fb37a9f6f09a: Status 404 returned error can't find the container with id afea555f8a41bdf090bbbe35202f90596f5893d77d04f1799af4fb37a9f6f09a Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.731947 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" event={"ID":"39397dd7-204c-4a41-be78-719c35bd432f","Type":"ContainerStarted","Data":"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f"} Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.732354 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" event={"ID":"39397dd7-204c-4a41-be78-719c35bd432f","Type":"ContainerStarted","Data":"afea555f8a41bdf090bbbe35202f90596f5893d77d04f1799af4fb37a9f6f09a"} Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.733766 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.755969 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" podStartSLOduration=3.755955959 podStartE2EDuration="3.755955959s" podCreationTimestamp="2026-01-31 15:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:40.753612072 +0000 UTC m=+306.526941124" watchObservedRunningTime="2026-01-31 15:03:40.755955959 +0000 UTC m=+306.529285001" Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.870311 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:40 crc kubenswrapper[4735]: I0131 15:03:40.870418 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:40 
crc kubenswrapper[4735]: I0131 15:03:40.933919 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.057505 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.057553 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.157798 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.551145 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab86c29d-1802-47c8-87c3-c915c29fdbf6" path="/var/lib/kubelet/pods/ab86c29d-1802-47c8-87c3-c915c29fdbf6/volumes" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.552652 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce46594d-615f-45c8-84da-ed2a7b81ebbd" path="/var/lib/kubelet/pods/ce46594d-615f-45c8-84da-ed2a7b81ebbd/volumes" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.733331 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.734537 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.738300 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.738747 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.739195 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.739287 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.743155 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.743726 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.746817 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk74f\" (UniqueName: \"kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.746909 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: 
\"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.746946 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.746987 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.747044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.752198 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.755347 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.825306 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hw9dp" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.847948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.848092 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk74f\" (UniqueName: \"kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.848187 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.848223 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: 
\"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.848276 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.849328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.849537 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.849646 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.864823 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:41 crc kubenswrapper[4735]: I0131 15:03:41.875003 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk74f\" (UniqueName: \"kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f\") pod \"controller-manager-f4c688f4d-64nq9\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.058984 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.112365 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6c52m" podUID="dd10e32d-9552-474f-abfa-5a15cb41d654" containerName="registry-server" probeResult="failure" output=< Jan 31 15:03:42 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:03:42 crc kubenswrapper[4735]: > Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.348700 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.751872 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" event={"ID":"686fe389-c97f-4226-aa0d-e1be7012111e","Type":"ContainerStarted","Data":"e4cba00dce9c040353e1e78dfac78c3208f4acf1cfb0e6459b0c8310e634dccf"} Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.751932 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" event={"ID":"686fe389-c97f-4226-aa0d-e1be7012111e","Type":"ContainerStarted","Data":"50937ea2dbf204fccc21487b451b36571fcb45988b151bf4b3f79ca6befc2175"} Jan 31 15:03:42 crc kubenswrapper[4735]: I0131 15:03:42.781306 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" podStartSLOduration=5.7812782160000005 podStartE2EDuration="5.781278216s" podCreationTimestamp="2026-01-31 15:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:42.772694011 +0000 UTC m=+308.546023073" watchObservedRunningTime="2026-01-31 15:03:42.781278216 +0000 UTC m=+308.554607278" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.271340 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.271393 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.338054 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.468482 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.468958 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.559935 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.758701 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.770047 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:43 crc 
kubenswrapper[4735]: I0131 15:03:43.809046 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wvwvj" Jan 31 15:03:43 crc kubenswrapper[4735]: I0131 15:03:43.810032 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7rmb" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.636085 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gk69l"] Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.638089 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.665915 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gk69l"] Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666260 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-bound-sa-token\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666339 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-trusted-ca\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666377 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-registry-certificates\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666395 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-registry-tls\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666453 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666474 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b320cee8-f738-4158-bdb6-398279990a21-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc 
kubenswrapper[4735]: I0131 15:03:50.666706 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b320cee8-f738-4158-bdb6-398279990a21-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.666800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxgf\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-kube-api-access-tpxgf\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.695691 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767756 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-trusted-ca\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767852 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-registry-certificates\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767879 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-registry-tls\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767923 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b320cee8-f738-4158-bdb6-398279990a21-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767956 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b320cee8-f738-4158-bdb6-398279990a21-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.767991 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tpxgf\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-kube-api-access-tpxgf\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.768026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-bound-sa-token\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.768762 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b320cee8-f738-4158-bdb6-398279990a21-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.769237 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-trusted-ca\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.769372 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b320cee8-f738-4158-bdb6-398279990a21-registry-certificates\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.774233 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b320cee8-f738-4158-bdb6-398279990a21-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.774255 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-registry-tls\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.788678 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-bound-sa-token\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.790561 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxgf\" (UniqueName: \"kubernetes.io/projected/b320cee8-f738-4158-bdb6-398279990a21-kube-api-access-tpxgf\") pod \"image-registry-66df7c8f76-gk69l\" (UID: \"b320cee8-f738-4158-bdb6-398279990a21\") " pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 
15:03:50 crc kubenswrapper[4735]: I0131 15:03:50.968136 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.142861 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.193267 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6c52m" Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.498929 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gk69l"] Jan 31 15:03:51 crc kubenswrapper[4735]: W0131 15:03:51.507120 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb320cee8_f738_4158_bdb6_398279990a21.slice/crio-3aa2c930236461b02a69f3c1efa827a2cac7efa028eb88b18b29639307b0457b WatchSource:0}: Error finding container 3aa2c930236461b02a69f3c1efa827a2cac7efa028eb88b18b29639307b0457b: Status 404 returned error can't find the container with id 3aa2c930236461b02a69f3c1efa827a2cac7efa028eb88b18b29639307b0457b Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.811175 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" event={"ID":"b320cee8-f738-4158-bdb6-398279990a21","Type":"ContainerStarted","Data":"e4c44b8e89d7c3854ec38fefd10875a17eefa106df250272a6ad9d8473ca7473"} Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.811640 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" event={"ID":"b320cee8-f738-4158-bdb6-398279990a21","Type":"ContainerStarted","Data":"3aa2c930236461b02a69f3c1efa827a2cac7efa028eb88b18b29639307b0457b"} Jan 31 15:03:51 crc kubenswrapper[4735]: I0131 15:03:51.841027 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" podStartSLOduration=1.840994229 podStartE2EDuration="1.840994229s" podCreationTimestamp="2026-01-31 15:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:51.840193066 +0000 UTC m=+317.613522178" watchObservedRunningTime="2026-01-31 15:03:51.840994229 +0000 UTC m=+317.614323311" Jan 31 15:03:52 crc kubenswrapper[4735]: I0131 15:03:52.818197 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:03:56 crc kubenswrapper[4735]: I0131 15:03:56.716990 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:56 crc kubenswrapper[4735]: I0131 15:03:56.719020 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" podUID="686fe389-c97f-4226-aa0d-e1be7012111e" containerName="controller-manager" containerID="cri-o://e4cba00dce9c040353e1e78dfac78c3208f4acf1cfb0e6459b0c8310e634dccf" gracePeriod=30 Jan 31 15:03:56 crc kubenswrapper[4735]: I0131 15:03:56.798285 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:56 crc 
kubenswrapper[4735]: I0131 15:03:56.798611 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" podUID="39397dd7-204c-4a41-be78-719c35bd432f" containerName="route-controller-manager" containerID="cri-o://ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f" gracePeriod=30 Jan 31 15:03:56 crc kubenswrapper[4735]: I0131 15:03:56.856972 4735 generic.go:334] "Generic (PLEG): container finished" podID="686fe389-c97f-4226-aa0d-e1be7012111e" containerID="e4cba00dce9c040353e1e78dfac78c3208f4acf1cfb0e6459b0c8310e634dccf" exitCode=0 Jan 31 15:03:56 crc kubenswrapper[4735]: I0131 15:03:56.857037 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" event={"ID":"686fe389-c97f-4226-aa0d-e1be7012111e","Type":"ContainerDied","Data":"e4cba00dce9c040353e1e78dfac78c3208f4acf1cfb0e6459b0c8310e634dccf"} Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.276256 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.366568 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.367882 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert\") pod \"39397dd7-204c-4a41-be78-719c35bd432f\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.367961 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config\") pod \"39397dd7-204c-4a41-be78-719c35bd432f\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.367998 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlss2\" (UniqueName: \"kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2\") pod \"39397dd7-204c-4a41-be78-719c35bd432f\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.368083 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca\") pod \"39397dd7-204c-4a41-be78-719c35bd432f\" (UID: \"39397dd7-204c-4a41-be78-719c35bd432f\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.368768 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca" (OuterVolumeSpecName: "client-ca") pod "39397dd7-204c-4a41-be78-719c35bd432f" (UID: "39397dd7-204c-4a41-be78-719c35bd432f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.368805 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config" (OuterVolumeSpecName: "config") pod "39397dd7-204c-4a41-be78-719c35bd432f" (UID: "39397dd7-204c-4a41-be78-719c35bd432f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.373349 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2" (OuterVolumeSpecName: "kube-api-access-mlss2") pod "39397dd7-204c-4a41-be78-719c35bd432f" (UID: "39397dd7-204c-4a41-be78-719c35bd432f"). InnerVolumeSpecName "kube-api-access-mlss2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.373460 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39397dd7-204c-4a41-be78-719c35bd432f" (UID: "39397dd7-204c-4a41-be78-719c35bd432f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469357 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles\") pod \"686fe389-c97f-4226-aa0d-e1be7012111e\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469443 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk74f\" (UniqueName: \"kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f\") pod \"686fe389-c97f-4226-aa0d-e1be7012111e\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469492 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert\") pod \"686fe389-c97f-4226-aa0d-e1be7012111e\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469527 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config\") pod \"686fe389-c97f-4226-aa0d-e1be7012111e\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469571 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca\") pod \"686fe389-c97f-4226-aa0d-e1be7012111e\" (UID: \"686fe389-c97f-4226-aa0d-e1be7012111e\") " Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469843 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39397dd7-204c-4a41-be78-719c35bd432f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469860 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469873 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlss2\" (UniqueName: \"kubernetes.io/projected/39397dd7-204c-4a41-be78-719c35bd432f-kube-api-access-mlss2\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.469887 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39397dd7-204c-4a41-be78-719c35bd432f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.470190 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "686fe389-c97f-4226-aa0d-e1be7012111e" (UID: "686fe389-c97f-4226-aa0d-e1be7012111e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.470196 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca" (OuterVolumeSpecName: "client-ca") pod "686fe389-c97f-4226-aa0d-e1be7012111e" (UID: "686fe389-c97f-4226-aa0d-e1be7012111e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.470255 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config" (OuterVolumeSpecName: "config") pod "686fe389-c97f-4226-aa0d-e1be7012111e" (UID: "686fe389-c97f-4226-aa0d-e1be7012111e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.472788 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "686fe389-c97f-4226-aa0d-e1be7012111e" (UID: "686fe389-c97f-4226-aa0d-e1be7012111e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.472951 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f" (OuterVolumeSpecName: "kube-api-access-jk74f") pod "686fe389-c97f-4226-aa0d-e1be7012111e" (UID: "686fe389-c97f-4226-aa0d-e1be7012111e"). InnerVolumeSpecName "kube-api-access-jk74f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.571829 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.571867 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.571880 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk74f\" (UniqueName: \"kubernetes.io/projected/686fe389-c97f-4226-aa0d-e1be7012111e-kube-api-access-jk74f\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.571891 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686fe389-c97f-4226-aa0d-e1be7012111e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.571900 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686fe389-c97f-4226-aa0d-e1be7012111e-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.865235 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.865227 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4c688f4d-64nq9" event={"ID":"686fe389-c97f-4226-aa0d-e1be7012111e","Type":"ContainerDied","Data":"50937ea2dbf204fccc21487b451b36571fcb45988b151bf4b3f79ca6befc2175"} Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.865376 4735 scope.go:117] "RemoveContainer" containerID="e4cba00dce9c040353e1e78dfac78c3208f4acf1cfb0e6459b0c8310e634dccf" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.866599 4735 generic.go:334] "Generic (PLEG): container finished" podID="39397dd7-204c-4a41-be78-719c35bd432f" containerID="ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f" exitCode=0 Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.866627 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" event={"ID":"39397dd7-204c-4a41-be78-719c35bd432f","Type":"ContainerDied","Data":"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f"} Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.866647 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" event={"ID":"39397dd7-204c-4a41-be78-719c35bd432f","Type":"ContainerDied","Data":"afea555f8a41bdf090bbbe35202f90596f5893d77d04f1799af4fb37a9f6f09a"} Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.866692 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.887844 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.894683 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-589c67869d-68p5z"] Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.894771 4735 scope.go:117] "RemoveContainer" containerID="ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.899089 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.902729 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f4c688f4d-64nq9"] Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.912676 4735 scope.go:117] "RemoveContainer" containerID="ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f" Jan 31 15:03:57 crc kubenswrapper[4735]: E0131 15:03:57.913679 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f\": container with ID starting with ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f not found: ID does not exist" containerID="ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f" Jan 31 15:03:57 crc kubenswrapper[4735]: I0131 15:03:57.913739 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f"} err="failed to get container status \"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f\": rpc error: code = NotFound desc = could not find container \"ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f\": container with ID starting with ee244aef907b78eaadaac191c2a5cc98d6b421b4ad975bc7cdb9c9aede30d41f not found: ID does not exist" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.745087 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26"] Jan 31 15:03:58 crc kubenswrapper[4735]: E0131 15:03:58.745707 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39397dd7-204c-4a41-be78-719c35bd432f" containerName="route-controller-manager" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.745723 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="39397dd7-204c-4a41-be78-719c35bd432f" containerName="route-controller-manager" Jan 31 15:03:58 crc kubenswrapper[4735]: E0131 15:03:58.745740 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686fe389-c97f-4226-aa0d-e1be7012111e" containerName="controller-manager" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.745748 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="686fe389-c97f-4226-aa0d-e1be7012111e" containerName="controller-manager" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.745871 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="39397dd7-204c-4a41-be78-719c35bd432f" containerName="route-controller-manager" Jan 31 15:03:58 crc 
kubenswrapper[4735]: I0131 15:03:58.745892 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="686fe389-c97f-4226-aa0d-e1be7012111e" containerName="controller-manager" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.746407 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.749550 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.749584 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.749870 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.750203 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.751226 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9"] Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.752107 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.753397 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.754393 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.758348 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.758956 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.759262 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.759730 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.759789 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.760694 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.762460 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26"] Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.769196 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.790886 4735 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9"] Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890542 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-client-ca\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890591 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e525bb9-b139-4e48-a867-2deb9e706574-serving-cert\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890615 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-config\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890635 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jj6\" (UniqueName: \"kubernetes.io/projected/d6416e5b-f3af-4930-a9e6-03481928b897-kube-api-access-b8jj6\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890659 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-client-ca\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890879 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-config\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.890975 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6416e5b-f3af-4930-a9e6-03481928b897-serving-cert\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.891051 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txwb\" (UniqueName: \"kubernetes.io/projected/8e525bb9-b139-4e48-a867-2deb9e706574-kube-api-access-5txwb\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: 
\"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.891131 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.993025 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.993180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-client-ca\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.993948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e525bb9-b139-4e48-a867-2deb9e706574-serving-cert\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.994056 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-config\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.994097 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jj6\" (UniqueName: \"kubernetes.io/projected/d6416e5b-f3af-4930-a9e6-03481928b897-kube-api-access-b8jj6\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.994156 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-client-ca\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.995185 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-config\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc 
kubenswrapper[4735]: I0131 15:03:58.995294 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6416e5b-f3af-4930-a9e6-03481928b897-serving-cert\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.995368 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txwb\" (UniqueName: \"kubernetes.io/projected/8e525bb9-b139-4e48-a867-2deb9e706574-kube-api-access-5txwb\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.995656 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-client-ca\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.996492 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-client-ca\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:58 crc kubenswrapper[4735]: I0131 15:03:58.998956 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6416e5b-f3af-4930-a9e6-03481928b897-config\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.008062 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-config\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.008736 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e525bb9-b139-4e48-a867-2deb9e706574-proxy-ca-bundles\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.010170 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6416e5b-f3af-4930-a9e6-03481928b897-serving-cert\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.012280 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8e525bb9-b139-4e48-a867-2deb9e706574-serving-cert\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.024889 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txwb\" (UniqueName: \"kubernetes.io/projected/8e525bb9-b139-4e48-a867-2deb9e706574-kube-api-access-5txwb\") pod \"controller-manager-6fb9d56fc7-54jk9\" (UID: \"8e525bb9-b139-4e48-a867-2deb9e706574\") " pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.038937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jj6\" (UniqueName: \"kubernetes.io/projected/d6416e5b-f3af-4930-a9e6-03481928b897-kube-api-access-b8jj6\") pod \"route-controller-manager-76c8f6f747-xfb26\" (UID: \"d6416e5b-f3af-4930-a9e6-03481928b897\") " pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.069538 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.078966 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: W0131 15:03:59.547871 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6416e5b_f3af_4930_a9e6_03481928b897.slice/crio-b8612e3be724fd50e55213a909adce8462b6aaa8f84fceea197405b1504bf7b1 WatchSource:0}: Error finding container b8612e3be724fd50e55213a909adce8462b6aaa8f84fceea197405b1504bf7b1: Status 404 returned error can't find the container with id b8612e3be724fd50e55213a909adce8462b6aaa8f84fceea197405b1504bf7b1 Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.559969 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39397dd7-204c-4a41-be78-719c35bd432f" path="/var/lib/kubelet/pods/39397dd7-204c-4a41-be78-719c35bd432f/volumes" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.561135 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686fe389-c97f-4226-aa0d-e1be7012111e" path="/var/lib/kubelet/pods/686fe389-c97f-4226-aa0d-e1be7012111e/volumes" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.561883 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26"] Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.593318 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9"] Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.882105 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" event={"ID":"d6416e5b-f3af-4930-a9e6-03481928b897","Type":"ContainerStarted","Data":"1e2472ed6e15a1d3539fa94209f452bf5cd064af30e4e2ef6d1c13e91dc90e73"} Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.882466 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" 
event={"ID":"d6416e5b-f3af-4930-a9e6-03481928b897","Type":"ContainerStarted","Data":"b8612e3be724fd50e55213a909adce8462b6aaa8f84fceea197405b1504bf7b1"} Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.882849 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.884720 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" event={"ID":"8e525bb9-b139-4e48-a867-2deb9e706574","Type":"ContainerStarted","Data":"f02701d69d19bfc235e35db8e0a7f4878bba931c2079477d078964c8b8a27a20"} Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.884878 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" event={"ID":"8e525bb9-b139-4e48-a867-2deb9e706574","Type":"ContainerStarted","Data":"904f4aa48bf1dc893e3fcc1952b629420bbefcf5a5575a544f204bd875729107"} Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.884981 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.890039 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.901804 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" podStartSLOduration=3.90178928 podStartE2EDuration="3.90178928s" podCreationTimestamp="2026-01-31 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:59.899574805 +0000 UTC m=+325.672903877" watchObservedRunningTime="2026-01-31 15:03:59.90178928 +0000 UTC m=+325.675118332" Jan 31 15:03:59 crc kubenswrapper[4735]: I0131 15:03:59.923443 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb9d56fc7-54jk9" podStartSLOduration=3.923399598 podStartE2EDuration="3.923399598s" podCreationTimestamp="2026-01-31 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:59.919148231 +0000 UTC m=+325.692477303" watchObservedRunningTime="2026-01-31 15:03:59.923399598 +0000 UTC m=+325.696728640" Jan 31 15:04:00 crc kubenswrapper[4735]: I0131 15:04:00.257781 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76c8f6f747-xfb26" Jan 31 15:04:10 crc kubenswrapper[4735]: I0131 15:04:10.978340 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gk69l" Jan 31 15:04:11 crc kubenswrapper[4735]: I0131 15:04:11.048818 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.147549 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" podUID="e924aff1-607d-40b9-91a4-14813ff15844" containerName="registry" 
containerID="cri-o://f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5" gracePeriod=30 Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.593447 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.667122 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.667459 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.667541 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.667665 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.668501 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.668619 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.668724 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c82w\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.668820 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates\") pod \"e924aff1-607d-40b9-91a4-14813ff15844\" (UID: \"e924aff1-607d-40b9-91a4-14813ff15844\") " Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.669413 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: 
"e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.670020 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.674758 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.674901 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.675283 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.675784 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w" (OuterVolumeSpecName: "kube-api-access-8c82w") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "kube-api-access-8c82w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.681812 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.683836 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e924aff1-607d-40b9-91a4-14813ff15844" (UID: "e924aff1-607d-40b9-91a4-14813ff15844"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771504 4735 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e924aff1-607d-40b9-91a4-14813ff15844-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771601 4735 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e924aff1-607d-40b9-91a4-14813ff15844-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771636 4735 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771661 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771686 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771710 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c82w\" (UniqueName: \"kubernetes.io/projected/e924aff1-607d-40b9-91a4-14813ff15844-kube-api-access-8c82w\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4735]: I0131 15:04:36.771730 4735 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e924aff1-607d-40b9-91a4-14813ff15844-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.113653 4735 generic.go:334] "Generic (PLEG): container finished" podID="e924aff1-607d-40b9-91a4-14813ff15844" containerID="f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5" exitCode=0 Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.113722 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" event={"ID":"e924aff1-607d-40b9-91a4-14813ff15844","Type":"ContainerDied","Data":"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5"} Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.113764 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" event={"ID":"e924aff1-607d-40b9-91a4-14813ff15844","Type":"ContainerDied","Data":"50e9e1ba8a5b64a56911f229fabe007944ddef40d140c57a36a4c6b84b5f1cd0"} Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.113793 4735 scope.go:117] "RemoveContainer" containerID="f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.113964 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t7kmx" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.133809 4735 scope.go:117] "RemoveContainer" containerID="f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5" Jan 31 15:04:37 crc kubenswrapper[4735]: E0131 15:04:37.134411 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5\": container with ID starting with f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5 not found: ID does not exist" containerID="f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.134478 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5"} err="failed to get container status \"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5\": rpc error: code = NotFound desc = could not find container \"f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5\": container with ID starting with f766248eb92790d6976aba4f46db61bf04c03070f2144a4a7132aff1deef64f5 not found: ID does not exist" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.160952 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.176744 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t7kmx"] Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.346109 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.346199 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:37 crc kubenswrapper[4735]: I0131 15:04:37.593101 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e924aff1-607d-40b9-91a4-14813ff15844" path="/var/lib/kubelet/pods/e924aff1-607d-40b9-91a4-14813ff15844/volumes" Jan 31 15:05:07 crc kubenswrapper[4735]: I0131 15:05:07.346212 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:05:07 crc kubenswrapper[4735]: I0131 15:05:07.346944 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.346208 4735 patch_prober.go:28] interesting 
pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.347044 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.347132 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.348265 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.348390 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271" gracePeriod=600 Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.544792 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271" exitCode=0 Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.554220 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271"} Jan 31 15:05:37 crc kubenswrapper[4735]: I0131 15:05:37.554449 4735 scope.go:117] "RemoveContainer" containerID="7ac5f364c2c1e57fff70ae4b6c0b718b1ade72af53d2b6fe0da8fb1d189e3329" Jan 31 15:05:38 crc kubenswrapper[4735]: I0131 15:05:38.569062 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377"} Jan 31 15:07:37 crc kubenswrapper[4735]: I0131 15:07:37.346534 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:07:37 crc kubenswrapper[4735]: I0131 15:07:37.347465 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:07:57 
crc kubenswrapper[4735]: I0131 15:07:57.455057 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2"] Jan 31 15:07:57 crc kubenswrapper[4735]: E0131 15:07:57.455871 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e924aff1-607d-40b9-91a4-14813ff15844" containerName="registry" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.455886 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e924aff1-607d-40b9-91a4-14813ff15844" containerName="registry" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.455999 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e924aff1-607d-40b9-91a4-14813ff15844" containerName="registry" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.456458 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.458669 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.458803 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-svdmv" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.458836 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.463204 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2x2bm"] Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.464268 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2x2bm" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.475094 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wlqhl" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.478191 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2"] Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.485117 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25p6r"] Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.485730 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.490199 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hjxzf" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.491957 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2x2bm"] Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.519550 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25p6r"] Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.584100 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqq4\" (UniqueName: \"kubernetes.io/projected/3cca71ec-1c99-4260-9208-9e4202ff3e3e-kube-api-access-qrqq4\") pod \"cert-manager-cainjector-cf98fcc89-5t4c2\" (UID: \"3cca71ec-1c99-4260-9208-9e4202ff3e3e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.584143 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jblbs\" (UniqueName: \"kubernetes.io/projected/1715eb23-6cf5-4a8f-9d53-11fae6b38859-kube-api-access-jblbs\") pod \"cert-manager-858654f9db-2x2bm\" (UID: \"1715eb23-6cf5-4a8f-9d53-11fae6b38859\") " pod="cert-manager/cert-manager-858654f9db-2x2bm" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.685311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqq4\" (UniqueName: \"kubernetes.io/projected/3cca71ec-1c99-4260-9208-9e4202ff3e3e-kube-api-access-qrqq4\") pod \"cert-manager-cainjector-cf98fcc89-5t4c2\" (UID: \"3cca71ec-1c99-4260-9208-9e4202ff3e3e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.685361 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jblbs\" (UniqueName: \"kubernetes.io/projected/1715eb23-6cf5-4a8f-9d53-11fae6b38859-kube-api-access-jblbs\") pod \"cert-manager-858654f9db-2x2bm\" (UID: \"1715eb23-6cf5-4a8f-9d53-11fae6b38859\") " pod="cert-manager/cert-manager-858654f9db-2x2bm" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.685484 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frwc\" (UniqueName: \"kubernetes.io/projected/7032d6de-d341-4686-a7c9-f470bf8237cb-kube-api-access-5frwc\") pod \"cert-manager-webhook-687f57d79b-25p6r\" (UID: \"7032d6de-d341-4686-a7c9-f470bf8237cb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.704710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jblbs\" (UniqueName: \"kubernetes.io/projected/1715eb23-6cf5-4a8f-9d53-11fae6b38859-kube-api-access-jblbs\") pod \"cert-manager-858654f9db-2x2bm\" (UID: \"1715eb23-6cf5-4a8f-9d53-11fae6b38859\") " pod="cert-manager/cert-manager-858654f9db-2x2bm" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.708450 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrqq4\" (UniqueName: \"kubernetes.io/projected/3cca71ec-1c99-4260-9208-9e4202ff3e3e-kube-api-access-qrqq4\") pod \"cert-manager-cainjector-cf98fcc89-5t4c2\" (UID: \"3cca71ec-1c99-4260-9208-9e4202ff3e3e\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.786362 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5frwc\" (UniqueName: \"kubernetes.io/projected/7032d6de-d341-4686-a7c9-f470bf8237cb-kube-api-access-5frwc\") pod \"cert-manager-webhook-687f57d79b-25p6r\" (UID: \"7032d6de-d341-4686-a7c9-f470bf8237cb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.786750 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.804689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5frwc\" (UniqueName: \"kubernetes.io/projected/7032d6de-d341-4686-a7c9-f470bf8237cb-kube-api-access-5frwc\") pod \"cert-manager-webhook-687f57d79b-25p6r\" (UID: \"7032d6de-d341-4686-a7c9-f470bf8237cb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.804906 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2x2bm" Jan 31 15:07:57 crc kubenswrapper[4735]: I0131 15:07:57.817651 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.020947 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2"] Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.035243 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:07:58 crc kubenswrapper[4735]: W0131 15:07:58.307642 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7032d6de_d341_4686_a7c9_f470bf8237cb.slice/crio-531fd7b8340f1deb5eab2e8c1a5a09c18073b632606a701be13d24b9ae338edd WatchSource:0}: Error finding container 531fd7b8340f1deb5eab2e8c1a5a09c18073b632606a701be13d24b9ae338edd: Status 404 returned error can't find the container with id 531fd7b8340f1deb5eab2e8c1a5a09c18073b632606a701be13d24b9ae338edd Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.310788 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-25p6r"] Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.318575 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2x2bm"] Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.471449 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" event={"ID":"7032d6de-d341-4686-a7c9-f470bf8237cb","Type":"ContainerStarted","Data":"531fd7b8340f1deb5eab2e8c1a5a09c18073b632606a701be13d24b9ae338edd"} Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.472751 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" event={"ID":"3cca71ec-1c99-4260-9208-9e4202ff3e3e","Type":"ContainerStarted","Data":"cd347d01ac3ec05f5b5c80f80c9f20f9b688fd62099bd6b4c5b7acaca99e891d"} Jan 31 15:07:58 crc kubenswrapper[4735]: I0131 15:07:58.473769 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2x2bm" 
event={"ID":"1715eb23-6cf5-4a8f-9d53-11fae6b38859","Type":"ContainerStarted","Data":"d9f42e0d9470774b8e228538a2ed0f308aeebfe23da8bdb87697fa8151675f7f"} Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.512480 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" event={"ID":"7032d6de-d341-4686-a7c9-f470bf8237cb","Type":"ContainerStarted","Data":"2eb94a5bcb0eaf9a10f7f9b199c52fbcfdf521fc0f679ab53a332e8f653c2b74"} Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.513295 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.515223 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" event={"ID":"3cca71ec-1c99-4260-9208-9e4202ff3e3e","Type":"ContainerStarted","Data":"29f79f79f215ca999f59298474704028c1387fa7de09c833b35f549c35ba4d13"} Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.519740 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2x2bm" event={"ID":"1715eb23-6cf5-4a8f-9d53-11fae6b38859","Type":"ContainerStarted","Data":"8680e047684c5f8328685d3e8f16bad2addab88362585cc6233a989c7c57454f"} Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.541828 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" podStartSLOduration=2.424433558 podStartE2EDuration="5.541804844s" podCreationTimestamp="2026-01-31 15:07:57 +0000 UTC" firstStartedPulling="2026-01-31 15:07:58.309993494 +0000 UTC m=+564.083322566" lastFinishedPulling="2026-01-31 15:08:01.4273648 +0000 UTC m=+567.200693852" observedRunningTime="2026-01-31 15:08:02.533697845 +0000 UTC m=+568.307026937" watchObservedRunningTime="2026-01-31 15:08:02.541804844 +0000 UTC m=+568.315133926" Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.603843 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2x2bm" podStartSLOduration=2.482319279 podStartE2EDuration="5.603822581s" podCreationTimestamp="2026-01-31 15:07:57 +0000 UTC" firstStartedPulling="2026-01-31 15:07:58.310012095 +0000 UTC m=+564.083341147" lastFinishedPulling="2026-01-31 15:08:01.431515367 +0000 UTC m=+567.204844449" observedRunningTime="2026-01-31 15:08:02.561822248 +0000 UTC m=+568.335151320" watchObservedRunningTime="2026-01-31 15:08:02.603822581 +0000 UTC m=+568.377151623" Jan 31 15:08:02 crc kubenswrapper[4735]: I0131 15:08:02.604333 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5t4c2" podStartSLOduration=2.2254294310000002 podStartE2EDuration="5.604327205s" podCreationTimestamp="2026-01-31 15:07:57 +0000 UTC" firstStartedPulling="2026-01-31 15:07:58.035016038 +0000 UTC m=+563.808345080" lastFinishedPulling="2026-01-31 15:08:01.413913802 +0000 UTC m=+567.187242854" observedRunningTime="2026-01-31 15:08:02.598351647 +0000 UTC m=+568.371680719" watchObservedRunningTime="2026-01-31 15:08:02.604327205 +0000 UTC m=+568.377656247" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.346338 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.347245 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.471684 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c6zv"] Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.472199 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-controller" containerID="cri-o://aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.472697 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="sbdb" containerID="cri-o://88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.472770 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="nbdb" containerID="cri-o://a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.472832 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="northd" containerID="cri-o://b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.473751 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.473868 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-node" containerID="cri-o://7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.473935 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-acl-logging" containerID="cri-o://714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" gracePeriod=30 Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.567007 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" containerID="cri-o://bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" gracePeriod=30 Jan 31 15:08:07 crc 
kubenswrapper[4735]: I0131 15:08:07.820776 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-25p6r" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.824258 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/3.log" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.826722 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovn-acl-logging/0.log" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.827266 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovn-controller/0.log" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.827812 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884452 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dt5mn"] Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884661 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884674 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884683 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-node" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884690 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-node" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884698 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kubecfg-setup" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884708 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kubecfg-setup" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884718 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884725 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884740 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="sbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884746 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="sbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884756 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884764 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884779 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="nbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884786 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="nbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884794 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884801 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884811 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="northd" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884817 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="northd" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884830 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-acl-logging" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884838 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-acl-logging" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.884849 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884858 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884960 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="nbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884969 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884977 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-node" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884985 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884992 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="sbdb" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.884998 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885004 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885011 4735 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885019 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885028 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885036 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="northd" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885043 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovn-acl-logging" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.885124 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885131 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: E0131 15:08:07.885140 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.885145 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerName="ovnkube-controller" Jan 31 15:08:07 crc kubenswrapper[4735]: I0131 15:08:07.887114 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.016849 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.016919 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.016975 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017039 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017069 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket" (OuterVolumeSpecName: "log-socket") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017099 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017141 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017153 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017207 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017252 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017311 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017372 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017400 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017460 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017561 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017602 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017653 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017690 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017892 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017985 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018035 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018055 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018092 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6td7\" (UniqueName: \"kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018137 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018203 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd\") pod \"b0c86d4a-441f-4e3b-be28-632dadd81e81\" (UID: \"b0c86d4a-441f-4e3b-be28-632dadd81e81\") " Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018214 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018235 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash" (OuterVolumeSpecName: "host-slash") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018265 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log" (OuterVolumeSpecName: "node-log") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018302 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018345 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018409 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018490 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vnbb\" (UniqueName: \"kubernetes.io/projected/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-kube-api-access-9vnbb\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.017187 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018551 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-node-log\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018705 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018757 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-config\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018832 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-log-socket\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018901 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-netd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018944 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-etc-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.018974 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019001 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-ovn\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019094 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019167 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovn-node-metrics-cert\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019268 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019309 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-netns\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019376 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-bin\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019454 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-systemd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-script-lib\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-env-overrides\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019631 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-kubelet\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019678 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-systemd-units\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019716 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-var-lib-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019760 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019876 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-slash\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.019921 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020000 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020057 4735 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020086 4735 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020113 4735 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020142 4735 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020172 4735 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020191 4735 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020215 4735 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020241 4735 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020265 4735 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020292 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020311 4735 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020337 4735 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-run-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020363 4735 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020391 4735 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.020416 4735 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.025569 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.026067 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7" (OuterVolumeSpecName: "kube-api-access-g6td7") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "kube-api-access-g6td7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.043150 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b0c86d4a-441f-4e3b-be28-632dadd81e81" (UID: "b0c86d4a-441f-4e3b-be28-632dadd81e81"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121476 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-netns\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121529 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-bin\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121556 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-systemd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121579 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-script-lib\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121610 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-kubelet\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121693 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-bin\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121734 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-netns\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121704 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-systemd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.121873 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-kubelet\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122513 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-script-lib\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122582 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-env-overrides\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122609 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-systemd-units\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122633 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-var-lib-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122655 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-slash\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vnbb\" (UniqueName: \"kubernetes.io/projected/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-kube-api-access-9vnbb\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122737 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-node-log\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122764 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122791 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-config\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-log-socket\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122850 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-netd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122873 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-etc-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122895 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-ovn\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122920 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.122954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovn-node-metrics-cert\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123008 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6td7\" (UniqueName: \"kubernetes.io/projected/b0c86d4a-441f-4e3b-be28-632dadd81e81-kube-api-access-g6td7\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123023 4735 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0c86d4a-441f-4e3b-be28-632dadd81e81-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123037 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123050 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/b0c86d4a-441f-4e3b-be28-632dadd81e81-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123581 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-log-socket\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123681 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-slash\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123687 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-etc-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123707 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-ovn\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123734 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-systemd-units\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123741 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123760 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-run-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123767 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-node-log\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123775 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-var-lib-openvswitch\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 
15:08:08.123787 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123821 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-host-cni-netd\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.123963 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-env-overrides\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.124811 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovnkube-config\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.128974 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-ovn-node-metrics-cert\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.150634 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vnbb\" (UniqueName: \"kubernetes.io/projected/e9f6fa2f-1f29-4232-af0d-b11e46ea70bd-kube-api-access-9vnbb\") pod \"ovnkube-node-dt5mn\" (UID: \"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.201651 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.573873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/2.log" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.574914 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/1.log" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.574976 4735 generic.go:334] "Generic (PLEG): container finished" podID="671e4f66-1c2f-436a-800d-fd3840e9830d" containerID="c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7" exitCode=2 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.575069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerDied","Data":"c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.575125 4735 scope.go:117] "RemoveContainer" containerID="c9f6d8b74ba8ff4e981e1a56c8117e5259c0593ea021444db48eaa54652839b5" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.575921 4735 scope.go:117] "RemoveContainer" containerID="c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.576290 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hg7gl_openshift-multus(671e4f66-1c2f-436a-800d-fd3840e9830d)\"" pod="openshift-multus/multus-hg7gl" podUID="671e4f66-1c2f-436a-800d-fd3840e9830d" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.580529 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovnkube-controller/3.log" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.586026 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovn-acl-logging/0.log" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587105 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2c6zv_b0c86d4a-441f-4e3b-be28-632dadd81e81/ovn-controller/0.log" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587834 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587876 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587892 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587905 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" exitCode=0 Jan 31 
15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587919 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587933 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587945 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" exitCode=143 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.587958 4735 generic.go:334] "Generic (PLEG): container finished" podID="b0c86d4a-441f-4e3b-be28-632dadd81e81" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" exitCode=143 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588025 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588068 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588089 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588109 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588126 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588144 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588163 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588180 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588191 4735 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588203 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588214 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588225 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588236 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588247 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588258 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588268 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588282 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588297 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588309 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588322 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588332 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588343 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588354 4735 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588364 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588374 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588385 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588395 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588411 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588473 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588492 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588506 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588520 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588534 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588547 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588560 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588574 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588587 4735 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588605 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588626 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" event={"ID":"b0c86d4a-441f-4e3b-be28-632dadd81e81","Type":"ContainerDied","Data":"e277af77be4f660fa8c7949f4f990ae5ef00f5b2fa458e089bc6f62f9a09801f"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588648 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588663 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588676 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588690 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588704 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588719 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588733 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588746 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588760 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588774 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.588931 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2c6zv" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.594142 4735 generic.go:334] "Generic (PLEG): container finished" podID="e9f6fa2f-1f29-4232-af0d-b11e46ea70bd" containerID="2e70ab793686db063214d85f547c609675bbfbc325caab5210297e764b612cd3" exitCode=0 Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.594202 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerDied","Data":"2e70ab793686db063214d85f547c609675bbfbc325caab5210297e764b612cd3"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.594241 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"6f19092414c25fdac568e367bc93c096e14742df6c2317a18544542837c9c361"} Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.710024 4735 scope.go:117] "RemoveContainer" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.717387 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c6zv"] Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.737245 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2c6zv"] Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.738120 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.762418 4735 scope.go:117] "RemoveContainer" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.786658 4735 scope.go:117] "RemoveContainer" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.805899 4735 scope.go:117] "RemoveContainer" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.828185 4735 scope.go:117] "RemoveContainer" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.863499 4735 scope.go:117] "RemoveContainer" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.886894 4735 scope.go:117] "RemoveContainer" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.922568 4735 scope.go:117] "RemoveContainer" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.937503 4735 scope.go:117] "RemoveContainer" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.951180 4735 scope.go:117] "RemoveContainer" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.951851 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": container with ID 
starting with bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489 not found: ID does not exist" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.951897 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} err="failed to get container status \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": rpc error: code = NotFound desc = could not find container \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": container with ID starting with bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.951922 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.952210 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": container with ID starting with bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0 not found: ID does not exist" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.952235 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} err="failed to get container status \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": rpc error: code = NotFound desc = could not find container \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": container with ID starting with bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.952253 4735 scope.go:117] "RemoveContainer" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.952574 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": container with ID starting with 88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0 not found: ID does not exist" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.952598 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} err="failed to get container status \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": rpc error: code = NotFound desc = could not find container \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": container with ID starting with 88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.952621 4735 scope.go:117] "RemoveContainer" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.953066 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": container with ID starting with a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e not found: ID does not exist" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.953098 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} err="failed to get container status \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": rpc error: code = NotFound desc = could not find container \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": container with ID starting with a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.953115 4735 scope.go:117] "RemoveContainer" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.953532 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": container with ID starting with b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09 not found: ID does not exist" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.953571 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} err="failed to get container status \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": rpc error: code = NotFound desc = could not find container \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": container with ID starting with b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.953599 4735 scope.go:117] "RemoveContainer" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.953984 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": container with ID starting with d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0 not found: ID does not exist" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.954033 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} err="failed to get container status \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": rpc error: code = NotFound desc = could not find container \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": container with ID starting with d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.954066 4735 scope.go:117] "RemoveContainer" 
containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.954369 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": container with ID starting with 7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4 not found: ID does not exist" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.954415 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} err="failed to get container status \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": rpc error: code = NotFound desc = could not find container \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": container with ID starting with 7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.954497 4735 scope.go:117] "RemoveContainer" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.954999 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": container with ID starting with 714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69 not found: ID does not exist" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.955100 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} err="failed to get container status \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": rpc error: code = NotFound desc = could not find container \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": container with ID starting with 714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.955129 4735 scope.go:117] "RemoveContainer" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.955476 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": container with ID starting with aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445 not found: ID does not exist" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.955518 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} err="failed to get container status \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": rpc error: code = NotFound desc = could not find container \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": container with ID starting with 
aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.955544 4735 scope.go:117] "RemoveContainer" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: E0131 15:08:08.956554 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": container with ID starting with c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4 not found: ID does not exist" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.956591 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} err="failed to get container status \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": rpc error: code = NotFound desc = could not find container \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": container with ID starting with c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.956609 4735 scope.go:117] "RemoveContainer" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.956957 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} err="failed to get container status \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": rpc error: code = NotFound desc = could not find container \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": container with ID starting with bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.956997 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.957329 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} err="failed to get container status \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": rpc error: code = NotFound desc = could not find container \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": container with ID starting with bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.957365 4735 scope.go:117] "RemoveContainer" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958048 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} err="failed to get container status \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": rpc error: code = NotFound desc = could not find container \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": container with ID starting with 
88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958083 4735 scope.go:117] "RemoveContainer" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958355 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} err="failed to get container status \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": rpc error: code = NotFound desc = could not find container \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": container with ID starting with a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958390 4735 scope.go:117] "RemoveContainer" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958661 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} err="failed to get container status \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": rpc error: code = NotFound desc = could not find container \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": container with ID starting with b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958697 4735 scope.go:117] "RemoveContainer" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958947 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} err="failed to get container status \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": rpc error: code = NotFound desc = could not find container \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": container with ID starting with d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.958982 4735 scope.go:117] "RemoveContainer" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959221 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} err="failed to get container status \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": rpc error: code = NotFound desc = could not find container \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": container with ID starting with 7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959256 4735 scope.go:117] "RemoveContainer" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959523 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} err="failed to get container status \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": rpc error: code = NotFound desc = could not find container \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": container with ID starting with 714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959560 4735 scope.go:117] "RemoveContainer" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959822 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} err="failed to get container status \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": rpc error: code = NotFound desc = could not find container \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": container with ID starting with aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.959854 4735 scope.go:117] "RemoveContainer" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.960111 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} err="failed to get container status \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": rpc error: code = NotFound desc = could not find container \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": container with ID starting with c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.960145 4735 scope.go:117] "RemoveContainer" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.960599 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} err="failed to get container status \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": rpc error: code = NotFound desc = could not find container \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": container with ID starting with bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.960631 4735 scope.go:117] "RemoveContainer" containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961016 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} err="failed to get container status \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": rpc error: code = NotFound desc = could not find container \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": container with ID starting with bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0 not found: ID does not exist" Jan 
31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961051 4735 scope.go:117] "RemoveContainer" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961634 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} err="failed to get container status \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": rpc error: code = NotFound desc = could not find container \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": container with ID starting with 88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961668 4735 scope.go:117] "RemoveContainer" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961967 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} err="failed to get container status \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": rpc error: code = NotFound desc = could not find container \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": container with ID starting with a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.961998 4735 scope.go:117] "RemoveContainer" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.962397 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} err="failed to get container status \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": rpc error: code = NotFound desc = could not find container \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": container with ID starting with b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.962453 4735 scope.go:117] "RemoveContainer" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.963034 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} err="failed to get container status \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": rpc error: code = NotFound desc = could not find container \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": container with ID starting with d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.963081 4735 scope.go:117] "RemoveContainer" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964068 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} err="failed to get container status 
\"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": rpc error: code = NotFound desc = could not find container \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": container with ID starting with 7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964107 4735 scope.go:117] "RemoveContainer" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964384 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} err="failed to get container status \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": rpc error: code = NotFound desc = could not find container \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": container with ID starting with 714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964440 4735 scope.go:117] "RemoveContainer" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964907 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} err="failed to get container status \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": rpc error: code = NotFound desc = could not find container \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": container with ID starting with aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.964938 4735 scope.go:117] "RemoveContainer" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965244 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} err="failed to get container status \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": rpc error: code = NotFound desc = could not find container \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": container with ID starting with c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965277 4735 scope.go:117] "RemoveContainer" containerID="bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965621 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489"} err="failed to get container status \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": rpc error: code = NotFound desc = could not find container \"bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489\": container with ID starting with bbf9265668af1ab1c5e2a0844c577a91e05cbaa0ef7b9979656e377525354489 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965652 4735 scope.go:117] "RemoveContainer" 
containerID="bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965973 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0"} err="failed to get container status \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": rpc error: code = NotFound desc = could not find container \"bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0\": container with ID starting with bcb264c60cb53a885f836543207586d7c978ed5a0ba4c1e15e8823d3038112d0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.965999 4735 scope.go:117] "RemoveContainer" containerID="88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966274 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0"} err="failed to get container status \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": rpc error: code = NotFound desc = could not find container \"88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0\": container with ID starting with 88242dac7ca3603f11f57a3bc9eebf06b161d5d271acccda99c840652413e5b0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966458 4735 scope.go:117] "RemoveContainer" containerID="a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966705 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e"} err="failed to get container status \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": rpc error: code = NotFound desc = could not find container \"a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e\": container with ID starting with a884e77509b020d07a87c6e31c8b37717180de9902c47ffb3601cde9f657af4e not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966727 4735 scope.go:117] "RemoveContainer" containerID="b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966966 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09"} err="failed to get container status \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": rpc error: code = NotFound desc = could not find container \"b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09\": container with ID starting with b075fb6c47d0dceed625f9d0df5498cb0b55f7df08b35150d79335bfa96efe09 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.966987 4735 scope.go:117] "RemoveContainer" containerID="d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967328 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0"} err="failed to get container status \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": rpc error: code = NotFound desc = could not find 
container \"d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0\": container with ID starting with d839db0dd6aea2640849a90d54df428770471d9ee2cc341853fd8621d4e4d1a0 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967352 4735 scope.go:117] "RemoveContainer" containerID="7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967580 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4"} err="failed to get container status \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": rpc error: code = NotFound desc = could not find container \"7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4\": container with ID starting with 7a3a0ab267eed87ef0d6a47226bae26a65856e498a7f3ec9edc984ff31210fc4 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967602 4735 scope.go:117] "RemoveContainer" containerID="714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967849 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69"} err="failed to get container status \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": rpc error: code = NotFound desc = could not find container \"714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69\": container with ID starting with 714239c693d11c7740a96977faa19d760d879000068de6fa3a1f86dca259ac69 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.967870 4735 scope.go:117] "RemoveContainer" containerID="aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.968089 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445"} err="failed to get container status \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": rpc error: code = NotFound desc = could not find container \"aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445\": container with ID starting with aaff6cbf0f74a514e57899635dbc824eac6dc82e60b979124f416c177b5ab445 not found: ID does not exist" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.968110 4735 scope.go:117] "RemoveContainer" containerID="c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4" Jan 31 15:08:08 crc kubenswrapper[4735]: I0131 15:08:08.968336 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4"} err="failed to get container status \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": rpc error: code = NotFound desc = could not find container \"c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4\": container with ID starting with c8e0b1f1cd2906aa56cc62763c11a704390b6d222f8c3e087454a94582e24fe4 not found: ID does not exist" Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.549603 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c86d4a-441f-4e3b-be28-632dadd81e81" path="/var/lib/kubelet/pods/b0c86d4a-441f-4e3b-be28-632dadd81e81/volumes" Jan 31 
15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.601799 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/2.log" Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607442 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"6d8bff09c43b3d548e0f1b7653ee312f1db515aac22da21ccd653169c279ba13"} Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607480 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"ecc24b5f74f66122f5ba4b4611968134f24eb9d4773a7a8928374eb8a96ce568"} Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607494 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"3d09e20ce550ac4c40e3d8b6ef165c59b4c3ff6019c2a0c8c76302557e469462"} Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607505 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"904e9c3188e58eea762eb2ed3be0ae27a5f5371ff1be1035b4cc6346719d85db"} Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607519 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"d1745044faec318a70f7ba28b2bca24f87bb3626841cd10ed14acc07154a8d6f"} Jan 31 15:08:09 crc kubenswrapper[4735]: I0131 15:08:09.607530 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"64e8939355e6bfb821e4d897a80bad87eedbad775753e1fc26e3104955041203"} Jan 31 15:08:12 crc kubenswrapper[4735]: I0131 15:08:12.640743 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"9a5f2109a186bd3ddc8c5c05ad09ade22c0573fac2137e2247f9b19010133750"} Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.656924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" event={"ID":"e9f6fa2f-1f29-4232-af0d-b11e46ea70bd","Type":"ContainerStarted","Data":"d0c1ef0bdc32ff24c3d8d53187b977ee609b177d560850f3412865d3eaa643ee"} Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.657250 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.657262 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.657273 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.689158 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 
15:08:14.694517 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" podStartSLOduration=7.694498726 podStartE2EDuration="7.694498726s" podCreationTimestamp="2026-01-31 15:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:08:14.691756198 +0000 UTC m=+580.465085270" watchObservedRunningTime="2026-01-31 15:08:14.694498726 +0000 UTC m=+580.467827808" Jan 31 15:08:14 crc kubenswrapper[4735]: I0131 15:08:14.753244 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:21 crc kubenswrapper[4735]: I0131 15:08:21.540837 4735 scope.go:117] "RemoveContainer" containerID="c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7" Jan 31 15:08:21 crc kubenswrapper[4735]: E0131 15:08:21.541917 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hg7gl_openshift-multus(671e4f66-1c2f-436a-800d-fd3840e9830d)\"" pod="openshift-multus/multus-hg7gl" podUID="671e4f66-1c2f-436a-800d-fd3840e9830d" Jan 31 15:08:32 crc kubenswrapper[4735]: I0131 15:08:32.540516 4735 scope.go:117] "RemoveContainer" containerID="c1c5125409d48de418d5a9baea5487ec092213055f98e22241ab39e982f705e7" Jan 31 15:08:32 crc kubenswrapper[4735]: I0131 15:08:32.777982 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hg7gl_671e4f66-1c2f-436a-800d-fd3840e9830d/kube-multus/2.log" Jan 31 15:08:32 crc kubenswrapper[4735]: I0131 15:08:32.778513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hg7gl" event={"ID":"671e4f66-1c2f-436a-800d-fd3840e9830d","Type":"ContainerStarted","Data":"6d8b6ec50662b0193772bbb04c89b1b748fc20de54e0a81f9f0e6454781f8093"} Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.346793 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.347396 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.347524 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.348283 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.348383 4735 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377" gracePeriod=600 Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.811593 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377" exitCode=0 Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.811630 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377"} Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.811936 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638"} Jan 31 15:08:37 crc kubenswrapper[4735]: I0131 15:08:37.811959 4735 scope.go:117] "RemoveContainer" containerID="4073e7cafbb3ca5b97140cff79933f518d6f956d53635427c5452e77e2fd3271" Jan 31 15:08:38 crc kubenswrapper[4735]: I0131 15:08:38.226913 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dt5mn" Jan 31 15:08:45 crc kubenswrapper[4735]: I0131 15:08:45.915706 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z"] Jan 31 15:08:45 crc kubenswrapper[4735]: I0131 15:08:45.917449 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:45 crc kubenswrapper[4735]: I0131 15:08:45.930970 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z"] Jan 31 15:08:45 crc kubenswrapper[4735]: I0131 15:08:45.940852 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.003037 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.003079 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6td\" (UniqueName: \"kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.003241 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.104283 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.104348 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k6td\" (UniqueName: \"kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.104389 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.105965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.106535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.139039 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k6td\" (UniqueName: \"kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.277081 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.555687 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z"] Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.880876 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerStarted","Data":"8d66e20b17cff1c0f991664e180aac7e52ec27059baaeaa0a0f8c66b25faa87b"} Jan 31 15:08:46 crc kubenswrapper[4735]: I0131 15:08:46.881258 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerStarted","Data":"d79b8727fa365dae812d1fa47a7d19bdd10cec982c524b6882e64cd5b39f3cfb"} Jan 31 15:08:47 crc kubenswrapper[4735]: I0131 15:08:47.889169 4735 generic.go:334] "Generic (PLEG): container finished" podID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerID="8d66e20b17cff1c0f991664e180aac7e52ec27059baaeaa0a0f8c66b25faa87b" exitCode=0 Jan 31 15:08:47 crc kubenswrapper[4735]: I0131 15:08:47.889226 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerDied","Data":"8d66e20b17cff1c0f991664e180aac7e52ec27059baaeaa0a0f8c66b25faa87b"} Jan 31 15:08:49 crc kubenswrapper[4735]: I0131 15:08:49.904297 4735 generic.go:334] "Generic (PLEG): container finished" podID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerID="01debb8c5973041b3e7cc344c0e66051a35e9ba32bdf2e909c06dc8fd5b5518a" exitCode=0 Jan 31 15:08:49 crc kubenswrapper[4735]: I0131 15:08:49.904766 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" 
event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerDied","Data":"01debb8c5973041b3e7cc344c0e66051a35e9ba32bdf2e909c06dc8fd5b5518a"} Jan 31 15:08:50 crc kubenswrapper[4735]: I0131 15:08:50.914728 4735 generic.go:334] "Generic (PLEG): container finished" podID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerID="69a51c1c83ea411f91707f640ed04ec578c9ad8f488c2d9ee32f4d9e69a6923d" exitCode=0 Jan 31 15:08:50 crc kubenswrapper[4735]: I0131 15:08:50.914786 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerDied","Data":"69a51c1c83ea411f91707f640ed04ec578c9ad8f488c2d9ee32f4d9e69a6923d"} Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.194596 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.395309 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k6td\" (UniqueName: \"kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td\") pod \"691e8099-7fa0-4462-857f-6bbfab6502bc\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.395354 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util\") pod \"691e8099-7fa0-4462-857f-6bbfab6502bc\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.395437 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle\") pod \"691e8099-7fa0-4462-857f-6bbfab6502bc\" (UID: \"691e8099-7fa0-4462-857f-6bbfab6502bc\") " Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.396647 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle" (OuterVolumeSpecName: "bundle") pod "691e8099-7fa0-4462-857f-6bbfab6502bc" (UID: "691e8099-7fa0-4462-857f-6bbfab6502bc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.405702 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td" (OuterVolumeSpecName: "kube-api-access-4k6td") pod "691e8099-7fa0-4462-857f-6bbfab6502bc" (UID: "691e8099-7fa0-4462-857f-6bbfab6502bc"). InnerVolumeSpecName "kube-api-access-4k6td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.479396 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util" (OuterVolumeSpecName: "util") pod "691e8099-7fa0-4462-857f-6bbfab6502bc" (UID: "691e8099-7fa0-4462-857f-6bbfab6502bc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.499174 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.499279 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k6td\" (UniqueName: \"kubernetes.io/projected/691e8099-7fa0-4462-857f-6bbfab6502bc-kube-api-access-4k6td\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.499300 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/691e8099-7fa0-4462-857f-6bbfab6502bc-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.929014 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" event={"ID":"691e8099-7fa0-4462-857f-6bbfab6502bc","Type":"ContainerDied","Data":"d79b8727fa365dae812d1fa47a7d19bdd10cec982c524b6882e64cd5b39f3cfb"} Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.929061 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79b8727fa365dae812d1fa47a7d19bdd10cec982c524b6882e64cd5b39f3cfb" Jan 31 15:08:52 crc kubenswrapper[4735]: I0131 15:08:52.929127 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.520248 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9rvf2"] Jan 31 15:08:57 crc kubenswrapper[4735]: E0131 15:08:57.521047 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="util" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.521061 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="util" Jan 31 15:08:57 crc kubenswrapper[4735]: E0131 15:08:57.521075 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="extract" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.521083 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="extract" Jan 31 15:08:57 crc kubenswrapper[4735]: E0131 15:08:57.521105 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="pull" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.521114 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="pull" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.521235 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="691e8099-7fa0-4462-857f-6bbfab6502bc" containerName="extract" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.521668 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.523460 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.523703 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bnzfq" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.524442 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.573991 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9rvf2"] Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.664518 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9vj\" (UniqueName: \"kubernetes.io/projected/a11f4f28-7d8c-439b-9e8a-903060113cf4-kube-api-access-cf9vj\") pod \"nmstate-operator-646758c888-9rvf2\" (UID: \"a11f4f28-7d8c-439b-9e8a-903060113cf4\") " pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.765359 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9vj\" (UniqueName: \"kubernetes.io/projected/a11f4f28-7d8c-439b-9e8a-903060113cf4-kube-api-access-cf9vj\") pod \"nmstate-operator-646758c888-9rvf2\" (UID: \"a11f4f28-7d8c-439b-9e8a-903060113cf4\") " pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.794393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9vj\" (UniqueName: \"kubernetes.io/projected/a11f4f28-7d8c-439b-9e8a-903060113cf4-kube-api-access-cf9vj\") pod \"nmstate-operator-646758c888-9rvf2\" (UID: \"a11f4f28-7d8c-439b-9e8a-903060113cf4\") " pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" Jan 31 15:08:57 crc kubenswrapper[4735]: I0131 15:08:57.858674 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" Jan 31 15:08:58 crc kubenswrapper[4735]: I0131 15:08:58.111012 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9rvf2"] Jan 31 15:08:58 crc kubenswrapper[4735]: I0131 15:08:58.970534 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" event={"ID":"a11f4f28-7d8c-439b-9e8a-903060113cf4","Type":"ContainerStarted","Data":"d0f16f9b30f7cf57e993e6995ba52d00e33d82e6646e68c68b85dede50ca1720"} Jan 31 15:09:00 crc kubenswrapper[4735]: I0131 15:09:00.985279 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" event={"ID":"a11f4f28-7d8c-439b-9e8a-903060113cf4","Type":"ContainerStarted","Data":"7a6f2fc5fbb4e6b8f43e94173cf309170810e09621d2390449defaf5c9e1f883"} Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.204103 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9rvf2" podStartSLOduration=6.9526695499999995 podStartE2EDuration="9.204085594s" podCreationTimestamp="2026-01-31 15:08:57 +0000 UTC" firstStartedPulling="2026-01-31 15:08:58.133364019 +0000 UTC m=+623.906693091" lastFinishedPulling="2026-01-31 15:09:00.384780083 +0000 UTC m=+626.158109135" observedRunningTime="2026-01-31 15:09:01.007272365 +0000 UTC m=+626.780601477" watchObservedRunningTime="2026-01-31 15:09:06.204085594 +0000 UTC m=+631.977414636" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.208621 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6cbvj"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.209609 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.213494 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-h5vwq" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.225570 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.226170 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.228654 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.249711 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hs75p"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.250310 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.283766 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6cbvj"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.300563 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.345244 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.346059 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.348151 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.348169 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.348327 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-czf7t" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.361139 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385322 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw6v\" (UniqueName: \"kubernetes.io/projected/cd12d118-a925-4765-a3c4-38e34aa3c548-kube-api-access-ppw6v\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: \"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385375 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skmnm\" (UniqueName: \"kubernetes.io/projected/c18ba473-8399-4059-a6c4-22990f6e1cfe-kube-api-access-skmnm\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385397 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-nmstate-lock\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385414 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-ovs-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385452 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385473 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-dbus-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385493 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9cd\" (UniqueName: \"kubernetes.io/projected/69c76992-f3a3-4e9a-bc71-0eb6a7852b6e-kube-api-access-qd9cd\") pod \"nmstate-metrics-54757c584b-6cbvj\" (UID: \"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385519 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecdc44de-8e1b-477a-860f-780a279594cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd12d118-a925-4765-a3c4-38e34aa3c548-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: \"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.385559 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkgs\" (UniqueName: \"kubernetes.io/projected/ecdc44de-8e1b-477a-860f-780a279594cc-kube-api-access-zhkgs\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486560 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skmnm\" (UniqueName: \"kubernetes.io/projected/c18ba473-8399-4059-a6c4-22990f6e1cfe-kube-api-access-skmnm\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486609 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-nmstate-lock\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486628 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-ovs-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486646 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486669 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-dbus-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486676 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-nmstate-lock\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486689 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9cd\" (UniqueName: \"kubernetes.io/projected/69c76992-f3a3-4e9a-bc71-0eb6a7852b6e-kube-api-access-qd9cd\") pod \"nmstate-metrics-54757c584b-6cbvj\" (UID: \"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486727 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecdc44de-8e1b-477a-860f-780a279594cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486756 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd12d118-a925-4765-a3c4-38e34aa3c548-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: \"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486752 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-ovs-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486774 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkgs\" (UniqueName: \"kubernetes.io/projected/ecdc44de-8e1b-477a-860f-780a279594cc-kube-api-access-zhkgs\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: E0131 15:09:06.486887 4735 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.486907 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppw6v\" (UniqueName: \"kubernetes.io/projected/cd12d118-a925-4765-a3c4-38e34aa3c548-kube-api-access-ppw6v\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: 
\"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: E0131 15:09:06.486927 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert podName:ecdc44de-8e1b-477a-860f-780a279594cc nodeName:}" failed. No retries permitted until 2026-01-31 15:09:06.986912488 +0000 UTC m=+632.760241530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-4r6xg" (UID: "ecdc44de-8e1b-477a-860f-780a279594cc") : secret "plugin-serving-cert" not found Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.487145 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c18ba473-8399-4059-a6c4-22990f6e1cfe-dbus-socket\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.487619 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ecdc44de-8e1b-477a-860f-780a279594cc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.499286 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd12d118-a925-4765-a3c4-38e34aa3c548-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: \"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.504594 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skmnm\" (UniqueName: \"kubernetes.io/projected/c18ba473-8399-4059-a6c4-22990f6e1cfe-kube-api-access-skmnm\") pod \"nmstate-handler-hs75p\" (UID: \"c18ba473-8399-4059-a6c4-22990f6e1cfe\") " pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.510124 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9cd\" (UniqueName: \"kubernetes.io/projected/69c76992-f3a3-4e9a-bc71-0eb6a7852b6e-kube-api-access-qd9cd\") pod \"nmstate-metrics-54757c584b-6cbvj\" (UID: \"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.513077 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppw6v\" (UniqueName: \"kubernetes.io/projected/cd12d118-a925-4765-a3c4-38e34aa3c548-kube-api-access-ppw6v\") pod \"nmstate-webhook-8474b5b9d8-lztrk\" (UID: \"cd12d118-a925-4765-a3c4-38e34aa3c548\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.520221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhkgs\" (UniqueName: \"kubernetes.io/projected/ecdc44de-8e1b-477a-860f-780a279594cc-kube-api-access-zhkgs\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.525493 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.542693 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.551768 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bc6845dd9-t7dbd"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.552609 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.562480 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.564391 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc6845dd9-t7dbd"] Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588394 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-trusted-ca-bundle\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588463 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-oauth-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-console-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588522 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-oauth-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588583 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588619 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9q97\" (UniqueName: \"kubernetes.io/projected/fa62273d-b1ba-425f-bfc9-7191b716e335-kube-api-access-b9q97\") pod \"console-bc6845dd9-t7dbd\" (UID: 
\"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.588644 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-service-ca\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: W0131 15:09:06.612677 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc18ba473_8399_4059_a6c4_22990f6e1cfe.slice/crio-5475c821d2ec8669b347bedc4a7b3d1fc7f60db10cc49938e365cd64f019d823 WatchSource:0}: Error finding container 5475c821d2ec8669b347bedc4a7b3d1fc7f60db10cc49938e365cd64f019d823: Status 404 returned error can't find the container with id 5475c821d2ec8669b347bedc4a7b3d1fc7f60db10cc49938e365cd64f019d823 Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693453 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-oauth-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693516 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693553 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9q97\" (UniqueName: \"kubernetes.io/projected/fa62273d-b1ba-425f-bfc9-7191b716e335-kube-api-access-b9q97\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693580 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-service-ca\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693646 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-trusted-ca-bundle\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693683 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-oauth-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.693704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-console-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.694401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-service-ca\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.694544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-oauth-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.695842 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-console-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.698244 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-serving-cert\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.698264 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa62273d-b1ba-425f-bfc9-7191b716e335-console-oauth-config\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.700651 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa62273d-b1ba-425f-bfc9-7191b716e335-trusted-ca-bundle\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.712488 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9q97\" (UniqueName: \"kubernetes.io/projected/fa62273d-b1ba-425f-bfc9-7191b716e335-kube-api-access-b9q97\") pod \"console-bc6845dd9-t7dbd\" (UID: \"fa62273d-b1ba-425f-bfc9-7191b716e335\") " pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.716035 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6cbvj"] Jan 31 15:09:06 crc kubenswrapper[4735]: W0131 15:09:06.725724 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69c76992_f3a3_4e9a_bc71_0eb6a7852b6e.slice/crio-68235a6cafe895c64cc34094488468f29bd6062b6fd42f6d93c4e13dbd87381b WatchSource:0}: Error finding container 68235a6cafe895c64cc34094488468f29bd6062b6fd42f6d93c4e13dbd87381b: Status 404 returned error can't find the 
container with id 68235a6cafe895c64cc34094488468f29bd6062b6fd42f6d93c4e13dbd87381b Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.763712 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk"] Jan 31 15:09:06 crc kubenswrapper[4735]: W0131 15:09:06.766005 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd12d118_a925_4765_a3c4_38e34aa3c548.slice/crio-9eec3ca6eda360a9e31d43296895aa53155cedcfb75fb0151842f634fda67dea WatchSource:0}: Error finding container 9eec3ca6eda360a9e31d43296895aa53155cedcfb75fb0151842f634fda67dea: Status 404 returned error can't find the container with id 9eec3ca6eda360a9e31d43296895aa53155cedcfb75fb0151842f634fda67dea Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.902291 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:06 crc kubenswrapper[4735]: I0131 15:09:06.997374 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.001559 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecdc44de-8e1b-477a-860f-780a279594cc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-4r6xg\" (UID: \"ecdc44de-8e1b-477a-860f-780a279594cc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.022665 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" event={"ID":"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e","Type":"ContainerStarted","Data":"68235a6cafe895c64cc34094488468f29bd6062b6fd42f6d93c4e13dbd87381b"} Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.023795 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" event={"ID":"cd12d118-a925-4765-a3c4-38e34aa3c548","Type":"ContainerStarted","Data":"9eec3ca6eda360a9e31d43296895aa53155cedcfb75fb0151842f634fda67dea"} Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.025088 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hs75p" event={"ID":"c18ba473-8399-4059-a6c4-22990f6e1cfe","Type":"ContainerStarted","Data":"5475c821d2ec8669b347bedc4a7b3d1fc7f60db10cc49938e365cd64f019d823"} Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.108993 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc6845dd9-t7dbd"] Jan 31 15:09:07 crc kubenswrapper[4735]: W0131 15:09:07.116783 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa62273d_b1ba_425f_bfc9_7191b716e335.slice/crio-82c37cf12c05ab1b60e5075b7c18bfa469b980e64052d80c0f2a843e86a20878 WatchSource:0}: Error finding container 82c37cf12c05ab1b60e5075b7c18bfa469b980e64052d80c0f2a843e86a20878: Status 404 returned error can't find the container with id 82c37cf12c05ab1b60e5075b7c18bfa469b980e64052d80c0f2a843e86a20878 Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 
15:09:07.272476 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" Jan 31 15:09:07 crc kubenswrapper[4735]: I0131 15:09:07.476498 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg"] Jan 31 15:09:07 crc kubenswrapper[4735]: W0131 15:09:07.481947 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecdc44de_8e1b_477a_860f_780a279594cc.slice/crio-e08386da30944f18275b7c289bac64d487db8c886376a82c370ce31b666ded84 WatchSource:0}: Error finding container e08386da30944f18275b7c289bac64d487db8c886376a82c370ce31b666ded84: Status 404 returned error can't find the container with id e08386da30944f18275b7c289bac64d487db8c886376a82c370ce31b666ded84 Jan 31 15:09:08 crc kubenswrapper[4735]: I0131 15:09:08.035721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" event={"ID":"ecdc44de-8e1b-477a-860f-780a279594cc","Type":"ContainerStarted","Data":"e08386da30944f18275b7c289bac64d487db8c886376a82c370ce31b666ded84"} Jan 31 15:09:08 crc kubenswrapper[4735]: I0131 15:09:08.038566 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc6845dd9-t7dbd" event={"ID":"fa62273d-b1ba-425f-bfc9-7191b716e335","Type":"ContainerStarted","Data":"443d6eace593911ea281450a11070413023420343b187174136a1e774510c295"} Jan 31 15:09:08 crc kubenswrapper[4735]: I0131 15:09:08.038629 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc6845dd9-t7dbd" event={"ID":"fa62273d-b1ba-425f-bfc9-7191b716e335","Type":"ContainerStarted","Data":"82c37cf12c05ab1b60e5075b7c18bfa469b980e64052d80c0f2a843e86a20878"} Jan 31 15:09:08 crc kubenswrapper[4735]: I0131 15:09:08.066640 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bc6845dd9-t7dbd" podStartSLOduration=2.06661439 podStartE2EDuration="2.06661439s" podCreationTimestamp="2026-01-31 15:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:09:08.061853975 +0000 UTC m=+633.835183047" watchObservedRunningTime="2026-01-31 15:09:08.06661439 +0000 UTC m=+633.839943462" Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.053949 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" event={"ID":"cd12d118-a925-4765-a3c4-38e34aa3c548","Type":"ContainerStarted","Data":"515babd982ccb782e2a63231f8bf8a96d492237bbafd74aaf5a1b2eca86b97ff"} Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.056047 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.056116 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hs75p" event={"ID":"c18ba473-8399-4059-a6c4-22990f6e1cfe","Type":"ContainerStarted","Data":"8273d68dd32f8f64b129490f2ea2f76f48193a60a75de950c3a686bd93959d62"} Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.056176 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.057079 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" event={"ID":"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e","Type":"ContainerStarted","Data":"95eda56f5c09fdd071837bb7fb5b8b24f0dd5851c665e401e2bf8de546010808"} Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.069956 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" podStartSLOduration=1.430351589 podStartE2EDuration="4.069938521s" podCreationTimestamp="2026-01-31 15:09:06 +0000 UTC" firstStartedPulling="2026-01-31 15:09:06.767916779 +0000 UTC m=+632.541245821" lastFinishedPulling="2026-01-31 15:09:09.407503701 +0000 UTC m=+635.180832753" observedRunningTime="2026-01-31 15:09:10.069143649 +0000 UTC m=+635.842472711" watchObservedRunningTime="2026-01-31 15:09:10.069938521 +0000 UTC m=+635.843267563" Jan 31 15:09:10 crc kubenswrapper[4735]: I0131 15:09:10.087337 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hs75p" podStartSLOduration=1.293788304 podStartE2EDuration="4.087321102s" podCreationTimestamp="2026-01-31 15:09:06 +0000 UTC" firstStartedPulling="2026-01-31 15:09:06.614800157 +0000 UTC m=+632.388129199" lastFinishedPulling="2026-01-31 15:09:09.408332945 +0000 UTC m=+635.181661997" observedRunningTime="2026-01-31 15:09:10.083615937 +0000 UTC m=+635.856944979" watchObservedRunningTime="2026-01-31 15:09:10.087321102 +0000 UTC m=+635.860650144" Jan 31 15:09:11 crc kubenswrapper[4735]: I0131 15:09:11.067707 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" event={"ID":"ecdc44de-8e1b-477a-860f-780a279594cc","Type":"ContainerStarted","Data":"ef4409026392e0d8c367fb9d667f75ca6301251a6933766fd440923df1bda7f9"} Jan 31 15:09:11 crc kubenswrapper[4735]: I0131 15:09:11.092885 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-4r6xg" podStartSLOduration=2.203899645 podStartE2EDuration="5.092869246s" podCreationTimestamp="2026-01-31 15:09:06 +0000 UTC" firstStartedPulling="2026-01-31 15:09:07.484759725 +0000 UTC m=+633.258088757" lastFinishedPulling="2026-01-31 15:09:10.373729316 +0000 UTC m=+636.147058358" observedRunningTime="2026-01-31 15:09:11.087045762 +0000 UTC m=+636.860374804" watchObservedRunningTime="2026-01-31 15:09:11.092869246 +0000 UTC m=+636.866198278" Jan 31 15:09:13 crc kubenswrapper[4735]: I0131 15:09:13.084531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" event={"ID":"69c76992-f3a3-4e9a-bc71-0eb6a7852b6e","Type":"ContainerStarted","Data":"9f5fa9a1a60e693b3c993b4f2ad48ff745e7ffbe0e88e8ad31212e9825811bce"} Jan 31 15:09:13 crc kubenswrapper[4735]: I0131 15:09:13.132229 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-6cbvj" podStartSLOduration=1.892514185 podStartE2EDuration="7.132209695s" podCreationTimestamp="2026-01-31 15:09:06 +0000 UTC" firstStartedPulling="2026-01-31 15:09:06.729598507 +0000 UTC m=+632.502927549" lastFinishedPulling="2026-01-31 15:09:11.969294017 +0000 UTC m=+637.742623059" observedRunningTime="2026-01-31 15:09:13.105232083 +0000 UTC m=+638.878561195" watchObservedRunningTime="2026-01-31 15:09:13.132209695 +0000 UTC m=+638.905538747" Jan 31 15:09:16 crc kubenswrapper[4735]: I0131 15:09:16.601102 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-hs75p" Jan 31 15:09:16 crc kubenswrapper[4735]: I0131 15:09:16.903198 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:16 crc kubenswrapper[4735]: I0131 15:09:16.903259 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:16 crc kubenswrapper[4735]: I0131 15:09:16.911670 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:17 crc kubenswrapper[4735]: I0131 15:09:17.119966 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bc6845dd9-t7dbd" Jan 31 15:09:17 crc kubenswrapper[4735]: I0131 15:09:17.197276 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:09:26 crc kubenswrapper[4735]: I0131 15:09:26.552801 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lztrk" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.659150 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd"] Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.662604 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.666486 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.677158 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd"] Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.720370 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2888l\" (UniqueName: \"kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.720454 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.720534 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.822163 4735 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.822308 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2888l\" (UniqueName: \"kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.822403 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.823037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.823531 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.856085 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2888l\" (UniqueName: \"kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:39 crc kubenswrapper[4735]: I0131 15:09:39.995987 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:40 crc kubenswrapper[4735]: I0131 15:09:40.263558 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd"] Jan 31 15:09:40 crc kubenswrapper[4735]: I0131 15:09:40.695811 4735 generic.go:334] "Generic (PLEG): container finished" podID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerID="6e6a63969a6ecdca3b89bd4d6da05313b9927fa9007f772a892880e9800c02c1" exitCode=0 Jan 31 15:09:40 crc kubenswrapper[4735]: I0131 15:09:40.695872 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" event={"ID":"19a8ca6f-2c54-4006-8e89-4aa7bde7e254","Type":"ContainerDied","Data":"6e6a63969a6ecdca3b89bd4d6da05313b9927fa9007f772a892880e9800c02c1"} Jan 31 15:09:40 crc kubenswrapper[4735]: I0131 15:09:40.695909 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" event={"ID":"19a8ca6f-2c54-4006-8e89-4aa7bde7e254","Type":"ContainerStarted","Data":"f7568b6a9e08fc986c09200883dfa10e8f039b04931695792893693c8ccc4332"} Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.272117 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kkjjj" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerName="console" containerID="cri-o://0720891a1df7789e582756a0cff91f654f9f3c93704ac86f97123dbc0908be67" gracePeriod=15 Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.711091 4735 generic.go:334] "Generic (PLEG): container finished" podID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerID="157c0c498ef698b35945b4820252e5bcc069bff018316e60c56796d2e5c8abca" exitCode=0 Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.711171 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" event={"ID":"19a8ca6f-2c54-4006-8e89-4aa7bde7e254","Type":"ContainerDied","Data":"157c0c498ef698b35945b4820252e5bcc069bff018316e60c56796d2e5c8abca"} Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.716865 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kkjjj_84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07/console/0.log" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.716930 4735 generic.go:334] "Generic (PLEG): container finished" podID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerID="0720891a1df7789e582756a0cff91f654f9f3c93704ac86f97123dbc0908be67" exitCode=2 Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.716964 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkjjj" event={"ID":"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07","Type":"ContainerDied","Data":"0720891a1df7789e582756a0cff91f654f9f3c93704ac86f97123dbc0908be67"} Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.716995 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kkjjj" event={"ID":"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07","Type":"ContainerDied","Data":"35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b"} Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.717012 4735 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="35d75e79cd976359fc7cc6afeefaab5be1190013c839eeee91c3a67aef8eda7b" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.721141 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kkjjj_84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07/console/0.log" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.721208 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873680 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873770 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873812 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6t74\" (UniqueName: \"kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873866 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873907 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873935 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.873970 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca\") pod \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\" (UID: \"84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07\") " Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.875250 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config" (OuterVolumeSpecName: "console-config") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.875314 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca" (OuterVolumeSpecName: "service-ca") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.875315 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.875778 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.881623 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.886827 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74" (OuterVolumeSpecName: "kube-api-access-q6t74") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "kube-api-access-q6t74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.889414 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" (UID: "84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976056 4735 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976116 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976137 4735 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976157 4735 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976176 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6t74\" (UniqueName: \"kubernetes.io/projected/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-kube-api-access-q6t74\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976197 4735 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:42 crc kubenswrapper[4735]: I0131 15:09:42.976216 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:43 crc kubenswrapper[4735]: I0131 15:09:43.728012 4735 generic.go:334] "Generic (PLEG): container finished" podID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerID="9500d18c64ef4a122bf7454179db0a7c5d5b184e5f0d7657b53ee443c048fe19" exitCode=0 Jan 31 15:09:43 crc kubenswrapper[4735]: I0131 15:09:43.728146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" event={"ID":"19a8ca6f-2c54-4006-8e89-4aa7bde7e254","Type":"ContainerDied","Data":"9500d18c64ef4a122bf7454179db0a7c5d5b184e5f0d7657b53ee443c048fe19"} Jan 31 15:09:43 crc kubenswrapper[4735]: I0131 15:09:43.728173 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kkjjj" Jan 31 15:09:43 crc kubenswrapper[4735]: I0131 15:09:43.776568 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:09:43 crc kubenswrapper[4735]: I0131 15:09:43.780612 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kkjjj"] Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.012476 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.205815 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util\") pod \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.205972 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2888l\" (UniqueName: \"kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l\") pod \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.206119 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle\") pod \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\" (UID: \"19a8ca6f-2c54-4006-8e89-4aa7bde7e254\") " Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.208270 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle" (OuterVolumeSpecName: "bundle") pod "19a8ca6f-2c54-4006-8e89-4aa7bde7e254" (UID: "19a8ca6f-2c54-4006-8e89-4aa7bde7e254"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.213712 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l" (OuterVolumeSpecName: "kube-api-access-2888l") pod "19a8ca6f-2c54-4006-8e89-4aa7bde7e254" (UID: "19a8ca6f-2c54-4006-8e89-4aa7bde7e254"). InnerVolumeSpecName "kube-api-access-2888l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.248367 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util" (OuterVolumeSpecName: "util") pod "19a8ca6f-2c54-4006-8e89-4aa7bde7e254" (UID: "19a8ca6f-2c54-4006-8e89-4aa7bde7e254"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.307331 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.307383 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.307403 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2888l\" (UniqueName: \"kubernetes.io/projected/19a8ca6f-2c54-4006-8e89-4aa7bde7e254-kube-api-access-2888l\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.552748 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" path="/var/lib/kubelet/pods/84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07/volumes" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.745537 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" event={"ID":"19a8ca6f-2c54-4006-8e89-4aa7bde7e254","Type":"ContainerDied","Data":"f7568b6a9e08fc986c09200883dfa10e8f039b04931695792893693c8ccc4332"} Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.745590 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7568b6a9e08fc986c09200883dfa10e8f039b04931695792893693c8ccc4332" Jan 31 15:09:45 crc kubenswrapper[4735]: I0131 15:09:45.745716 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.556784 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb"] Jan 31 15:09:54 crc kubenswrapper[4735]: E0131 15:09:54.557555 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="util" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557571 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="util" Jan 31 15:09:54 crc kubenswrapper[4735]: E0131 15:09:54.557581 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="pull" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557589 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="pull" Jan 31 15:09:54 crc kubenswrapper[4735]: E0131 15:09:54.557598 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerName="console" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557606 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerName="console" Jan 31 15:09:54 crc kubenswrapper[4735]: E0131 15:09:54.557625 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557631 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557733 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a8ca6f-2c54-4006-8e89-4aa7bde7e254" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.557746 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e5eeff-bbc9-4c8d-9b4d-b4e3cda36d07" containerName="console" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.558178 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.560495 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.560583 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.560846 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.561140 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-j4f95" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.562157 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.577077 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb"] Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.734783 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-apiservice-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.734848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-webhook-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.734916 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbc2\" (UniqueName: \"kubernetes.io/projected/0f2b4446-8543-4182-bf59-d1be74b899c9-kube-api-access-xkbc2\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.820040 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c"] Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.820970 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.823794 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.824283 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-88glz" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.825075 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.836648 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-apiservice-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.836699 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-webhook-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.836742 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbc2\" (UniqueName: \"kubernetes.io/projected/0f2b4446-8543-4182-bf59-d1be74b899c9-kube-api-access-xkbc2\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.840655 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c"] Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.864135 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-apiservice-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.865187 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f2b4446-8543-4182-bf59-d1be74b899c9-webhook-cert\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.865883 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbc2\" (UniqueName: \"kubernetes.io/projected/0f2b4446-8543-4182-bf59-d1be74b899c9-kube-api-access-xkbc2\") pod \"metallb-operator-controller-manager-866454bfd-gxbsb\" (UID: \"0f2b4446-8543-4182-bf59-d1be74b899c9\") " pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.872164 4735 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.941041 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-apiservice-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.941125 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-webhook-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:54 crc kubenswrapper[4735]: I0131 15:09:54.941176 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5qz\" (UniqueName: \"kubernetes.io/projected/f84a6826-6439-4751-a46e-84a04759c021-kube-api-access-hx5qz\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.041982 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-apiservice-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.042044 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-webhook-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.042084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5qz\" (UniqueName: \"kubernetes.io/projected/f84a6826-6439-4751-a46e-84a04759c021-kube-api-access-hx5qz\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.048987 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-apiservice-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.056073 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f84a6826-6439-4751-a46e-84a04759c021-webhook-cert\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: 
\"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.079770 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5qz\" (UniqueName: \"kubernetes.io/projected/f84a6826-6439-4751-a46e-84a04759c021-kube-api-access-hx5qz\") pod \"metallb-operator-webhook-server-54d6fb8967-d6d4c\" (UID: \"f84a6826-6439-4751-a46e-84a04759c021\") " pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.139884 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.186098 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb"] Jan 31 15:09:55 crc kubenswrapper[4735]: W0131 15:09:55.197029 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f2b4446_8543_4182_bf59_d1be74b899c9.slice/crio-8292f6179988e5420f39221a0cc71579f8040b67c07751ed8430e64f8a4e8ac2 WatchSource:0}: Error finding container 8292f6179988e5420f39221a0cc71579f8040b67c07751ed8430e64f8a4e8ac2: Status 404 returned error can't find the container with id 8292f6179988e5420f39221a0cc71579f8040b67c07751ed8430e64f8a4e8ac2 Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.578156 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c"] Jan 31 15:09:55 crc kubenswrapper[4735]: W0131 15:09:55.589004 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf84a6826_6439_4751_a46e_84a04759c021.slice/crio-ef5115ba00d4e5d73a8d5af1e1571a23205eda33abafc663647563a649a6ac36 WatchSource:0}: Error finding container ef5115ba00d4e5d73a8d5af1e1571a23205eda33abafc663647563a649a6ac36: Status 404 returned error can't find the container with id ef5115ba00d4e5d73a8d5af1e1571a23205eda33abafc663647563a649a6ac36 Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.803064 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" event={"ID":"f84a6826-6439-4751-a46e-84a04759c021","Type":"ContainerStarted","Data":"ef5115ba00d4e5d73a8d5af1e1571a23205eda33abafc663647563a649a6ac36"} Jan 31 15:09:55 crc kubenswrapper[4735]: I0131 15:09:55.804486 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" event={"ID":"0f2b4446-8543-4182-bf59-d1be74b899c9","Type":"ContainerStarted","Data":"8292f6179988e5420f39221a0cc71579f8040b67c07751ed8430e64f8a4e8ac2"} Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.844592 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" event={"ID":"0f2b4446-8543-4182-bf59-d1be74b899c9","Type":"ContainerStarted","Data":"5da277403c2bfc96d762284ec02ac4d254c5fdf836ccb03b2789b9a28e41a81e"} Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.845065 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.848335 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" event={"ID":"f84a6826-6439-4751-a46e-84a04759c021","Type":"ContainerStarted","Data":"ea6feb48159db5e03c8e4ad57f09fa71845e3c371aea1d5dd91e8532fbd9fd32"} Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.848469 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.875222 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" podStartSLOduration=2.104145891 podStartE2EDuration="6.875195792s" podCreationTimestamp="2026-01-31 15:09:54 +0000 UTC" firstStartedPulling="2026-01-31 15:09:55.215902084 +0000 UTC m=+680.989231126" lastFinishedPulling="2026-01-31 15:09:59.986951985 +0000 UTC m=+685.760281027" observedRunningTime="2026-01-31 15:10:00.868789161 +0000 UTC m=+686.642118233" watchObservedRunningTime="2026-01-31 15:10:00.875195792 +0000 UTC m=+686.648524844" Jan 31 15:10:00 crc kubenswrapper[4735]: I0131 15:10:00.892509 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" podStartSLOduration=2.481757445 podStartE2EDuration="6.892483771s" podCreationTimestamp="2026-01-31 15:09:54 +0000 UTC" firstStartedPulling="2026-01-31 15:09:55.592670314 +0000 UTC m=+681.365999366" lastFinishedPulling="2026-01-31 15:10:00.00339665 +0000 UTC m=+685.776725692" observedRunningTime="2026-01-31 15:10:00.885901135 +0000 UTC m=+686.659230217" watchObservedRunningTime="2026-01-31 15:10:00.892483771 +0000 UTC m=+686.665812843" Jan 31 15:10:15 crc kubenswrapper[4735]: I0131 15:10:15.144082 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-54d6fb8967-d6d4c" Jan 31 15:10:34 crc kubenswrapper[4735]: I0131 15:10:34.876661 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-866454bfd-gxbsb" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.582031 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-w4xx8"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.585824 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.594242 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.596043 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.596356 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8zzv9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.598936 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.599540 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.601591 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.615221 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.683551 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v7cgt"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.684574 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.687269 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.687486 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.687572 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.687728 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sxcch" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.692276 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-j9l75"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.693082 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.695327 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.711302 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-j9l75"] Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.712878 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-sockets\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.712935 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/441f5a71-b5fa-4f6f-a825-40eb055760a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713455 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-startup\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713500 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54z56\" (UniqueName: 
\"kubernetes.io/projected/441f5a71-b5fa-4f6f-a825-40eb055760a0-kube-api-access-54z56\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713526 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-conf\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics-certs\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713603 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wdk\" (UniqueName: \"kubernetes.io/projected/651b0e17-dd6c-438b-a213-f4fd1da48cae-kube-api-access-j2wdk\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.713689 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-reloader\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.810470 4735 scope.go:117] "RemoveContainer" containerID="0720891a1df7789e582756a0cff91f654f9f3c93704ac86f97123dbc0908be67" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814524 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814578 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-sockets\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814615 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/441f5a71-b5fa-4f6f-a825-40eb055760a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814640 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-startup\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814678 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54z56\" (UniqueName: \"kubernetes.io/projected/441f5a71-b5fa-4f6f-a825-40eb055760a0-kube-api-access-54z56\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814696 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-conf\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814719 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics-certs\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814735 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lllj2\" (UniqueName: \"kubernetes.io/projected/38acb809-064d-43d6-8800-40cd1cf7f89a-kube-api-access-lllj2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814752 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wdk\" (UniqueName: \"kubernetes.io/projected/651b0e17-dd6c-438b-a213-f4fd1da48cae-kube-api-access-j2wdk\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814772 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-metrics-certs\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814789 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-cert\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814806 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-metrics-certs\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814823 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fsn5\" (UniqueName: 
\"kubernetes.io/projected/129f74a8-c107-4f72-9972-d2c81e811b93-kube-api-access-8fsn5\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814843 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814859 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/38acb809-064d-43d6-8800-40cd1cf7f89a-metallb-excludel2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.814876 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-reloader\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.815004 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-sockets\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.815187 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-conf\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.815281 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-reloader\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.815681 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/651b0e17-dd6c-438b-a213-f4fd1da48cae-frr-startup\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.816898 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.818791 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/651b0e17-dd6c-438b-a213-f4fd1da48cae-metrics-certs\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.819008 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/441f5a71-b5fa-4f6f-a825-40eb055760a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.835169 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54z56\" (UniqueName: \"kubernetes.io/projected/441f5a71-b5fa-4f6f-a825-40eb055760a0-kube-api-access-54z56\") pod \"frr-k8s-webhook-server-7df86c4f6c-849x9\" (UID: \"441f5a71-b5fa-4f6f-a825-40eb055760a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.838939 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wdk\" (UniqueName: \"kubernetes.io/projected/651b0e17-dd6c-438b-a213-f4fd1da48cae-kube-api-access-j2wdk\") pod \"frr-k8s-w4xx8\" (UID: \"651b0e17-dd6c-438b-a213-f4fd1da48cae\") " pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.906496 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.915992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lllj2\" (UniqueName: \"kubernetes.io/projected/38acb809-064d-43d6-8800-40cd1cf7f89a-kube-api-access-lllj2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916213 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-metrics-certs\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916341 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-cert\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916466 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-metrics-certs\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916591 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fsn5\" (UniqueName: \"kubernetes.io/projected/129f74a8-c107-4f72-9972-d2c81e811b93-kube-api-access-8fsn5\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916697 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/38acb809-064d-43d6-8800-40cd1cf7f89a-metallb-excludel2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.916808 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: E0131 15:10:35.916931 4735 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:10:35 crc kubenswrapper[4735]: E0131 15:10:35.916997 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist podName:38acb809-064d-43d6-8800-40cd1cf7f89a nodeName:}" failed. No retries permitted until 2026-01-31 15:10:36.416979324 +0000 UTC m=+722.190308356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist") pod "speaker-v7cgt" (UID: "38acb809-064d-43d6-8800-40cd1cf7f89a") : secret "metallb-memberlist" not found Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.917663 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/38acb809-064d-43d6-8800-40cd1cf7f89a-metallb-excludel2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.919767 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-metrics-certs\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.919781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-metrics-certs\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.920733 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.923895 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.930408 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/129f74a8-c107-4f72-9972-d2c81e811b93-cert\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.943715 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lllj2\" (UniqueName: \"kubernetes.io/projected/38acb809-064d-43d6-8800-40cd1cf7f89a-kube-api-access-lllj2\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:35 crc kubenswrapper[4735]: I0131 15:10:35.946323 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fsn5\" (UniqueName: \"kubernetes.io/projected/129f74a8-c107-4f72-9972-d2c81e811b93-kube-api-access-8fsn5\") pod \"controller-6968d8fdc4-j9l75\" (UID: \"129f74a8-c107-4f72-9972-d2c81e811b93\") " pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:36 crc kubenswrapper[4735]: I0131 15:10:36.017867 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:36 crc kubenswrapper[4735]: I0131 15:10:36.125896 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9"] Jan 31 15:10:36 crc kubenswrapper[4735]: W0131 15:10:36.126276 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441f5a71_b5fa_4f6f_a825_40eb055760a0.slice/crio-8866324fe0ec909968dcf492e25a725d29480c7dc70472b091532f9ebf192070 WatchSource:0}: Error finding container 8866324fe0ec909968dcf492e25a725d29480c7dc70472b091532f9ebf192070: Status 404 returned error can't find the container with id 8866324fe0ec909968dcf492e25a725d29480c7dc70472b091532f9ebf192070 Jan 31 15:10:36 crc kubenswrapper[4735]: I0131 15:10:36.429138 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:36 crc kubenswrapper[4735]: E0131 15:10:36.429523 4735 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:10:36 crc kubenswrapper[4735]: E0131 15:10:36.429585 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist podName:38acb809-064d-43d6-8800-40cd1cf7f89a nodeName:}" failed. No retries permitted until 2026-01-31 15:10:37.429568324 +0000 UTC m=+723.202897366 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist") pod "speaker-v7cgt" (UID: "38acb809-064d-43d6-8800-40cd1cf7f89a") : secret "metallb-memberlist" not found Jan 31 15:10:36 crc kubenswrapper[4735]: I0131 15:10:36.463798 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-j9l75"] Jan 31 15:10:36 crc kubenswrapper[4735]: W0131 15:10:36.477974 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129f74a8_c107_4f72_9972_d2c81e811b93.slice/crio-b39b143fcdc2b5a786890af96d64d3432d30041b66b4027de8c2a8414ea63843 WatchSource:0}: Error finding container b39b143fcdc2b5a786890af96d64d3432d30041b66b4027de8c2a8414ea63843: Status 404 returned error can't find the container with id b39b143fcdc2b5a786890af96d64d3432d30041b66b4027de8c2a8414ea63843 Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.089452 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-j9l75" event={"ID":"129f74a8-c107-4f72-9972-d2c81e811b93","Type":"ContainerStarted","Data":"7877bf1570b0f9038fbf83e5f96d3779a53754a61ac661b9b9aa8242bcbe5118"} Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.090162 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.090189 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-j9l75" event={"ID":"129f74a8-c107-4f72-9972-d2c81e811b93","Type":"ContainerStarted","Data":"cc9d4054df6a07e43bf80b9a06dcd85b01c40c5c68088ce7b729d42ce759e866"} Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.090208 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-j9l75" event={"ID":"129f74a8-c107-4f72-9972-d2c81e811b93","Type":"ContainerStarted","Data":"b39b143fcdc2b5a786890af96d64d3432d30041b66b4027de8c2a8414ea63843"} Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.091069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"8b139132e88c57ebc142bdce0d6b57186c46e0d29bee4e5ba06656a1cfcfaab6"} Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.092156 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" event={"ID":"441f5a71-b5fa-4f6f-a825-40eb055760a0","Type":"ContainerStarted","Data":"8866324fe0ec909968dcf492e25a725d29480c7dc70472b091532f9ebf192070"} Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.111766 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-j9l75" podStartSLOduration=2.11173858 podStartE2EDuration="2.11173858s" podCreationTimestamp="2026-01-31 15:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:10:37.105879164 +0000 UTC m=+722.879208206" watchObservedRunningTime="2026-01-31 15:10:37.11173858 +0000 UTC m=+722.885067642" Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.346262 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.346368 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.443343 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.450076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/38acb809-064d-43d6-8800-40cd1cf7f89a-memberlist\") pod \"speaker-v7cgt\" (UID: \"38acb809-064d-43d6-8800-40cd1cf7f89a\") " pod="metallb-system/speaker-v7cgt" Jan 31 15:10:37 crc kubenswrapper[4735]: I0131 15:10:37.505948 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v7cgt" Jan 31 15:10:37 crc kubenswrapper[4735]: W0131 15:10:37.564285 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38acb809_064d_43d6_8800_40cd1cf7f89a.slice/crio-0065d362d08a6f25b6fcd1d8352193e74dec5f90204d1cacd7ed1e175f3231a0 WatchSource:0}: Error finding container 0065d362d08a6f25b6fcd1d8352193e74dec5f90204d1cacd7ed1e175f3231a0: Status 404 returned error can't find the container with id 0065d362d08a6f25b6fcd1d8352193e74dec5f90204d1cacd7ed1e175f3231a0 Jan 31 15:10:38 crc kubenswrapper[4735]: I0131 15:10:38.105806 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7cgt" event={"ID":"38acb809-064d-43d6-8800-40cd1cf7f89a","Type":"ContainerStarted","Data":"51efbfab53364b82d0aa495c04f1ef48ec7daa122e46477e38481cb72a6de500"} Jan 31 15:10:38 crc kubenswrapper[4735]: I0131 15:10:38.105851 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7cgt" event={"ID":"38acb809-064d-43d6-8800-40cd1cf7f89a","Type":"ContainerStarted","Data":"0065d362d08a6f25b6fcd1d8352193e74dec5f90204d1cacd7ed1e175f3231a0"} Jan 31 15:10:39 crc kubenswrapper[4735]: I0131 15:10:39.114575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v7cgt" event={"ID":"38acb809-064d-43d6-8800-40cd1cf7f89a","Type":"ContainerStarted","Data":"65824b307dcf2a521eab16d0e401668fb77e8020a95457fcb09d8e3fbd51afdf"} Jan 31 15:10:39 crc kubenswrapper[4735]: I0131 15:10:39.115445 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v7cgt" Jan 31 15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.163952 4735 generic.go:334] "Generic (PLEG): container finished" podID="651b0e17-dd6c-438b-a213-f4fd1da48cae" containerID="b4fdbbf8f23506931451931d567030de2ef0823bbafec4143349df7640c67f58" exitCode=0 Jan 31 15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.164030 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerDied","Data":"b4fdbbf8f23506931451931d567030de2ef0823bbafec4143349df7640c67f58"} Jan 31 
15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.168320 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" event={"ID":"441f5a71-b5fa-4f6f-a825-40eb055760a0","Type":"ContainerStarted","Data":"912e5fbb77b40219d7dc512241ab2a5c9484cf256156234ba44325472053eed5"} Jan 31 15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.169248 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.200656 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v7cgt" podStartSLOduration=9.200637102 podStartE2EDuration="9.200637102s" podCreationTimestamp="2026-01-31 15:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:10:39.161746757 +0000 UTC m=+724.935075809" watchObservedRunningTime="2026-01-31 15:10:44.200637102 +0000 UTC m=+729.973966184" Jan 31 15:10:44 crc kubenswrapper[4735]: I0131 15:10:44.228904 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" podStartSLOduration=2.125749447 podStartE2EDuration="9.228880681s" podCreationTimestamp="2026-01-31 15:10:35 +0000 UTC" firstStartedPulling="2026-01-31 15:10:36.128930545 +0000 UTC m=+721.902259587" lastFinishedPulling="2026-01-31 15:10:43.232061779 +0000 UTC m=+729.005390821" observedRunningTime="2026-01-31 15:10:44.221698608 +0000 UTC m=+729.995027710" watchObservedRunningTime="2026-01-31 15:10:44.228880681 +0000 UTC m=+730.002209753" Jan 31 15:10:45 crc kubenswrapper[4735]: I0131 15:10:45.180389 4735 generic.go:334] "Generic (PLEG): container finished" podID="651b0e17-dd6c-438b-a213-f4fd1da48cae" containerID="e1c976df758a534b02d79a85afc31914bc83de382b6d70a5bbd4063db0bb6cec" exitCode=0 Jan 31 15:10:45 crc kubenswrapper[4735]: I0131 15:10:45.180516 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerDied","Data":"e1c976df758a534b02d79a85afc31914bc83de382b6d70a5bbd4063db0bb6cec"} Jan 31 15:10:46 crc kubenswrapper[4735]: I0131 15:10:46.023049 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-j9l75" Jan 31 15:10:46 crc kubenswrapper[4735]: I0131 15:10:46.189707 4735 generic.go:334] "Generic (PLEG): container finished" podID="651b0e17-dd6c-438b-a213-f4fd1da48cae" containerID="3f70acb40a271b2e2d3be6ecc034607b0ad58e837bf40004e54909820e7afe44" exitCode=0 Jan 31 15:10:46 crc kubenswrapper[4735]: I0131 15:10:46.189756 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerDied","Data":"3f70acb40a271b2e2d3be6ecc034607b0ad58e837bf40004e54909820e7afe44"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.204617 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"a95db1b1c9f1b155a6cccfbe8913d683483b80b6774f5b26b4a7311ced2cf010"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.205020 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" 
event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"554aa8d871d8ebec02889cb44bae7e17cbf6c3ee6d596e25c4a694e479f2b3e6"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.205042 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"f1ad905635e0cde1f6f2bf903a0bcd31ae13942729e67cc1add7751e1dd6f226"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.205058 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"423ec7b274c2821de858945eda5da851fd0c1cae69ae0fdf57b1bcc6d9cb7c6e"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.205079 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"e3af9616112ed3db656d6bbbd5e02d8b1e7783227be8c07f41f2ebedfba2b72f"} Jan 31 15:10:47 crc kubenswrapper[4735]: I0131 15:10:47.511865 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v7cgt" Jan 31 15:10:48 crc kubenswrapper[4735]: I0131 15:10:48.221375 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w4xx8" event={"ID":"651b0e17-dd6c-438b-a213-f4fd1da48cae","Type":"ContainerStarted","Data":"6e223d9a3e59bcf0c389089f0dac7912b374c37189ed578997180d4baaed1e15"} Jan 31 15:10:48 crc kubenswrapper[4735]: I0131 15:10:48.221869 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:48 crc kubenswrapper[4735]: I0131 15:10:48.299351 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-w4xx8" podStartSLOduration=6.184161233 podStartE2EDuration="13.299330928s" podCreationTimestamp="2026-01-31 15:10:35 +0000 UTC" firstStartedPulling="2026-01-31 15:10:36.130571711 +0000 UTC m=+721.903900753" lastFinishedPulling="2026-01-31 15:10:43.245741406 +0000 UTC m=+729.019070448" observedRunningTime="2026-01-31 15:10:48.296627711 +0000 UTC m=+734.069956803" watchObservedRunningTime="2026-01-31 15:10:48.299330928 +0000 UTC m=+734.072659970" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.346684 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.347570 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.352712 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kmhwr" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.352880 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.357676 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.361283 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.439880 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlsmg\" (UniqueName: \"kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg\") pod \"openstack-operator-index-n7gbx\" (UID: \"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7\") " pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.540913 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlsmg\" (UniqueName: \"kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg\") pod \"openstack-operator-index-n7gbx\" (UID: \"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7\") " pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.559503 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlsmg\" (UniqueName: \"kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg\") pod \"openstack-operator-index-n7gbx\" (UID: \"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7\") " pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.662565 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.907502 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:50 crc kubenswrapper[4735]: I0131 15:10:50.955357 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:10:51 crc kubenswrapper[4735]: I0131 15:10:51.146097 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:51 crc kubenswrapper[4735]: W0131 15:10:51.151770 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4367b2b_26a8_4c06_bee3_f1fdca03c1e7.slice/crio-912d114c9559cef2229333a34c76a206da5ad5f09d95cfb4393233218120765b WatchSource:0}: Error finding container 912d114c9559cef2229333a34c76a206da5ad5f09d95cfb4393233218120765b: Status 404 returned error can't find the container with id 912d114c9559cef2229333a34c76a206da5ad5f09d95cfb4393233218120765b Jan 31 15:10:51 crc kubenswrapper[4735]: I0131 15:10:51.241160 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7gbx" event={"ID":"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7","Type":"ContainerStarted","Data":"912d114c9559cef2229333a34c76a206da5ad5f09d95cfb4393233218120765b"} Jan 31 15:10:53 crc kubenswrapper[4735]: I0131 15:10:53.680043 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.261904 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7gbx" event={"ID":"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7","Type":"ContainerStarted","Data":"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd"} Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.261984 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n7gbx" podUID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" containerName="registry-server" containerID="cri-o://465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd" gracePeriod=2 Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.285033 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n7gbx" podStartSLOduration=1.47652453 podStartE2EDuration="4.284997579s" podCreationTimestamp="2026-01-31 15:10:50 +0000 UTC" firstStartedPulling="2026-01-31 15:10:51.154059014 +0000 UTC m=+736.927388066" lastFinishedPulling="2026-01-31 15:10:53.962532063 +0000 UTC m=+739.735861115" observedRunningTime="2026-01-31 15:10:54.277973581 +0000 UTC m=+740.051302643" watchObservedRunningTime="2026-01-31 15:10:54.284997579 +0000 UTC m=+740.058326621" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.302387 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pp8kj"] Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.303864 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.308644 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pp8kj"] Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.396374 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2ll\" (UniqueName: \"kubernetes.io/projected/5a0ed87d-afcc-44e0-a590-4f56b4338cb7-kube-api-access-5b2ll\") pod \"openstack-operator-index-pp8kj\" (UID: \"5a0ed87d-afcc-44e0-a590-4f56b4338cb7\") " pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.497270 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2ll\" (UniqueName: \"kubernetes.io/projected/5a0ed87d-afcc-44e0-a590-4f56b4338cb7-kube-api-access-5b2ll\") pod \"openstack-operator-index-pp8kj\" (UID: \"5a0ed87d-afcc-44e0-a590-4f56b4338cb7\") " pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.526251 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2ll\" (UniqueName: \"kubernetes.io/projected/5a0ed87d-afcc-44e0-a590-4f56b4338cb7-kube-api-access-5b2ll\") pod \"openstack-operator-index-pp8kj\" (UID: \"5a0ed87d-afcc-44e0-a590-4f56b4338cb7\") " pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.662860 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.699162 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.802986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlsmg\" (UniqueName: \"kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg\") pod \"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7\" (UID: \"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7\") " Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.808283 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg" (OuterVolumeSpecName: "kube-api-access-dlsmg") pod "a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" (UID: "a4367b2b-26a8-4c06-bee3-f1fdca03c1e7"). InnerVolumeSpecName "kube-api-access-dlsmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:54 crc kubenswrapper[4735]: I0131 15:10:54.904876 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlsmg\" (UniqueName: \"kubernetes.io/projected/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7-kube-api-access-dlsmg\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:55 crc kubenswrapper[4735]: W0131 15:10:55.143643 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a0ed87d_afcc_44e0_a590_4f56b4338cb7.slice/crio-98700aab1621a10ea72c6b9e60a9fcc550742ab2eb5475e7952e644529a45fea WatchSource:0}: Error finding container 98700aab1621a10ea72c6b9e60a9fcc550742ab2eb5475e7952e644529a45fea: Status 404 returned error can't find the container with id 98700aab1621a10ea72c6b9e60a9fcc550742ab2eb5475e7952e644529a45fea Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.145337 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pp8kj"] Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.272367 4735 generic.go:334] "Generic (PLEG): container finished" podID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" containerID="465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd" exitCode=0 Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.272574 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n7gbx" Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.272815 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7gbx" event={"ID":"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7","Type":"ContainerDied","Data":"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd"} Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.273366 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n7gbx" event={"ID":"a4367b2b-26a8-4c06-bee3-f1fdca03c1e7","Type":"ContainerDied","Data":"912d114c9559cef2229333a34c76a206da5ad5f09d95cfb4393233218120765b"} Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.273385 4735 scope.go:117] "RemoveContainer" containerID="465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd" Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.278065 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pp8kj" event={"ID":"5a0ed87d-afcc-44e0-a590-4f56b4338cb7","Type":"ContainerStarted","Data":"98700aab1621a10ea72c6b9e60a9fcc550742ab2eb5475e7952e644529a45fea"} Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.316620 4735 scope.go:117] "RemoveContainer" containerID="465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd" Jan 31 15:10:55 crc kubenswrapper[4735]: E0131 15:10:55.317913 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd\": container with ID starting with 465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd not found: ID does not exist" containerID="465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd" Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.318093 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd"} err="failed to 
get container status \"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd\": rpc error: code = NotFound desc = could not find container \"465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd\": container with ID starting with 465856d9130d957ea24601a96bd4116b71d0a91a449c8ed76db111502a6aeebd not found: ID does not exist" Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.319704 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.324931 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n7gbx"] Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.560701 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" path="/var/lib/kubelet/pods/a4367b2b-26a8-4c06-bee3-f1fdca03c1e7/volumes" Jan 31 15:10:55 crc kubenswrapper[4735]: I0131 15:10:55.929640 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-849x9" Jan 31 15:10:56 crc kubenswrapper[4735]: I0131 15:10:56.288567 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pp8kj" event={"ID":"5a0ed87d-afcc-44e0-a590-4f56b4338cb7","Type":"ContainerStarted","Data":"f7b0e34fcb13a948a1dda310108daaa3c6c7b6be4981714ace45dd1511a9b90a"} Jan 31 15:10:56 crc kubenswrapper[4735]: I0131 15:10:56.335340 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pp8kj" podStartSLOduration=2.281792782 podStartE2EDuration="2.335311525s" podCreationTimestamp="2026-01-31 15:10:54 +0000 UTC" firstStartedPulling="2026-01-31 15:10:55.148518202 +0000 UTC m=+740.921847304" lastFinishedPulling="2026-01-31 15:10:55.202036975 +0000 UTC m=+740.975366047" observedRunningTime="2026-01-31 15:10:56.306529751 +0000 UTC m=+742.079858833" watchObservedRunningTime="2026-01-31 15:10:56.335311525 +0000 UTC m=+742.108640567" Jan 31 15:11:04 crc kubenswrapper[4735]: I0131 15:11:04.700032 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:11:04 crc kubenswrapper[4735]: I0131 15:11:04.701042 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:11:04 crc kubenswrapper[4735]: I0131 15:11:04.748999 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:11:05 crc kubenswrapper[4735]: I0131 15:11:05.390550 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pp8kj" Jan 31 15:11:05 crc kubenswrapper[4735]: I0131 15:11:05.912409 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-w4xx8" Jan 31 15:11:07 crc kubenswrapper[4735]: I0131 15:11:07.346308 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:11:07 crc kubenswrapper[4735]: I0131 15:11:07.346797 4735 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:11:11 crc kubenswrapper[4735]: I0131 15:11:11.883732 4735 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.350097 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz"] Jan 31 15:11:12 crc kubenswrapper[4735]: E0131 15:11:12.350571 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" containerName="registry-server" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.350647 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" containerName="registry-server" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.350802 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4367b2b-26a8-4c06-bee3-f1fdca03c1e7" containerName="registry-server" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.351768 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.354288 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gcn7q" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.357513 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz"] Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.450779 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2wm\" (UniqueName: \"kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.451307 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.451657 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.553851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2wm\" (UniqueName: 
\"kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.554326 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.554785 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.555890 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.555965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.587897 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2wm\" (UniqueName: \"kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm\") pod \"97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:12 crc kubenswrapper[4735]: I0131 15:11:12.665521 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:13 crc kubenswrapper[4735]: I0131 15:11:13.161834 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz"] Jan 31 15:11:13 crc kubenswrapper[4735]: I0131 15:11:13.412032 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerStarted","Data":"e5e767b1f40d07567998fe2fbcef3d12d0b178c203666694690d6f5054b5ccb2"} Jan 31 15:11:13 crc kubenswrapper[4735]: I0131 15:11:13.412156 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerStarted","Data":"271e90190fa1956b3c3786fb69660fcf91d7aaee38f208a163629c1825e18c84"} Jan 31 15:11:14 crc kubenswrapper[4735]: I0131 15:11:14.424082 4735 generic.go:334] "Generic (PLEG): container finished" podID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerID="e5e767b1f40d07567998fe2fbcef3d12d0b178c203666694690d6f5054b5ccb2" exitCode=0 Jan 31 15:11:14 crc kubenswrapper[4735]: I0131 15:11:14.424380 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerDied","Data":"e5e767b1f40d07567998fe2fbcef3d12d0b178c203666694690d6f5054b5ccb2"} Jan 31 15:11:15 crc kubenswrapper[4735]: I0131 15:11:15.436390 4735 generic.go:334] "Generic (PLEG): container finished" podID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerID="3c00c58949b56ebfb4fdfc39e446c0e249df9104e50436b9ac5b49b2774a93d8" exitCode=0 Jan 31 15:11:15 crc kubenswrapper[4735]: I0131 15:11:15.436590 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerDied","Data":"3c00c58949b56ebfb4fdfc39e446c0e249df9104e50436b9ac5b49b2774a93d8"} Jan 31 15:11:16 crc kubenswrapper[4735]: I0131 15:11:16.446347 4735 generic.go:334] "Generic (PLEG): container finished" podID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerID="e5aab410766d8eeec95850e9695cf834bbcfc3ea9fe7c23d5350e9239bf81d4d" exitCode=0 Jan 31 15:11:16 crc kubenswrapper[4735]: I0131 15:11:16.446392 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerDied","Data":"e5aab410766d8eeec95850e9695cf834bbcfc3ea9fe7c23d5350e9239bf81d4d"} Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.767831 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.839828 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util\") pod \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.839908 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg2wm\" (UniqueName: \"kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm\") pod \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.839956 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle\") pod \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\" (UID: \"2bcbc1aa-0b7f-4aa4-a553-25427de0a734\") " Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.840830 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle" (OuterVolumeSpecName: "bundle") pod "2bcbc1aa-0b7f-4aa4-a553-25427de0a734" (UID: "2bcbc1aa-0b7f-4aa4-a553-25427de0a734"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.847899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm" (OuterVolumeSpecName: "kube-api-access-kg2wm") pod "2bcbc1aa-0b7f-4aa4-a553-25427de0a734" (UID: "2bcbc1aa-0b7f-4aa4-a553-25427de0a734"). InnerVolumeSpecName "kube-api-access-kg2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.854456 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util" (OuterVolumeSpecName: "util") pod "2bcbc1aa-0b7f-4aa4-a553-25427de0a734" (UID: "2bcbc1aa-0b7f-4aa4-a553-25427de0a734"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.941633 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.941677 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg2wm\" (UniqueName: \"kubernetes.io/projected/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-kube-api-access-kg2wm\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4735]: I0131 15:11:17.941690 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bcbc1aa-0b7f-4aa4-a553-25427de0a734-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:18 crc kubenswrapper[4735]: I0131 15:11:18.462616 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" event={"ID":"2bcbc1aa-0b7f-4aa4-a553-25427de0a734","Type":"ContainerDied","Data":"271e90190fa1956b3c3786fb69660fcf91d7aaee38f208a163629c1825e18c84"} Jan 31 15:11:18 crc kubenswrapper[4735]: I0131 15:11:18.463022 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271e90190fa1956b3c3786fb69660fcf91d7aaee38f208a163629c1825e18c84" Jan 31 15:11:18 crc kubenswrapper[4735]: I0131 15:11:18.462705 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.486619 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp"] Jan 31 15:11:25 crc kubenswrapper[4735]: E0131 15:11:25.488381 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="extract" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.488524 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="extract" Jan 31 15:11:25 crc kubenswrapper[4735]: E0131 15:11:25.488612 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="util" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.488691 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="util" Jan 31 15:11:25 crc kubenswrapper[4735]: E0131 15:11:25.488770 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="pull" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.488841 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="pull" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.489083 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcbc1aa-0b7f-4aa4-a553-25427de0a734" containerName="extract" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.489687 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.493401 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wqmpv" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.534038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp"] Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.543233 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdq9m\" (UniqueName: \"kubernetes.io/projected/7043e467-d103-458b-a498-c110f06809f1-kube-api-access-vdq9m\") pod \"openstack-operator-controller-init-7b494f4958-8qvfp\" (UID: \"7043e467-d103-458b-a498-c110f06809f1\") " pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.645194 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdq9m\" (UniqueName: \"kubernetes.io/projected/7043e467-d103-458b-a498-c110f06809f1-kube-api-access-vdq9m\") pod \"openstack-operator-controller-init-7b494f4958-8qvfp\" (UID: \"7043e467-d103-458b-a498-c110f06809f1\") " pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.664937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdq9m\" (UniqueName: \"kubernetes.io/projected/7043e467-d103-458b-a498-c110f06809f1-kube-api-access-vdq9m\") pod \"openstack-operator-controller-init-7b494f4958-8qvfp\" (UID: \"7043e467-d103-458b-a498-c110f06809f1\") " pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:25 crc kubenswrapper[4735]: I0131 15:11:25.813495 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:26 crc kubenswrapper[4735]: I0131 15:11:26.043176 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp"] Jan 31 15:11:26 crc kubenswrapper[4735]: I0131 15:11:26.535058 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" event={"ID":"7043e467-d103-458b-a498-c110f06809f1","Type":"ContainerStarted","Data":"9a037b6d7bf6c978b7ac3eb0c17458a12d05054935d4a130e0993921cfabd50a"} Jan 31 15:11:30 crc kubenswrapper[4735]: I0131 15:11:30.570711 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" event={"ID":"7043e467-d103-458b-a498-c110f06809f1","Type":"ContainerStarted","Data":"7f8a56fb53dcee27c44864e1f54a872a825ebf5f734a38269d33870f2b4ed1e6"} Jan 31 15:11:30 crc kubenswrapper[4735]: I0131 15:11:30.571365 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:30 crc kubenswrapper[4735]: I0131 15:11:30.608957 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" podStartSLOduration=1.43383915 podStartE2EDuration="5.608942484s" podCreationTimestamp="2026-01-31 15:11:25 +0000 UTC" firstStartedPulling="2026-01-31 15:11:26.049812813 +0000 UTC m=+771.823141855" lastFinishedPulling="2026-01-31 15:11:30.224916137 +0000 UTC m=+775.998245189" observedRunningTime="2026-01-31 15:11:30.606846755 +0000 UTC m=+776.380175797" watchObservedRunningTime="2026-01-31 15:11:30.608942484 +0000 UTC m=+776.382271526" Jan 31 15:11:35 crc kubenswrapper[4735]: I0131 15:11:35.818870 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7b494f4958-8qvfp" Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.346324 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.346756 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.346823 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.347751 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.347870 4735 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638" gracePeriod=600 Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.639553 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638" exitCode=0 Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.639598 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638"} Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.639628 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264"} Jan 31 15:11:37 crc kubenswrapper[4735]: I0131 15:11:37.639661 4735 scope.go:117] "RemoveContainer" containerID="2ee3fe8e50600c2e145c48a8270712d86ce84318289a9ff4b93b03e43cfff377" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.614619 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.616056 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.618338 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7pqqd" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.632013 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.632901 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.637880 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vlvpw" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.647148 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.661899 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.691488 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.692408 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.697799 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-khdc4" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.704237 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.737484 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.738406 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.747240 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lkn6q" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.772510 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.773755 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.777710 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4z84c" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.789622 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stswz\" (UniqueName: \"kubernetes.io/projected/c4915a12-75dc-4b2e-a039-c98287c8cec4-kube-api-access-stswz\") pod \"heat-operator-controller-manager-65dc6c8d9c-r7xlm\" (UID: \"c4915a12-75dc-4b2e-a039-c98287c8cec4\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.789678 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrht\" (UniqueName: \"kubernetes.io/projected/e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2-kube-api-access-clrht\") pod \"glance-operator-controller-manager-64d858bbbd-k4bh2\" (UID: \"e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2\") " pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.789698 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nmt\" (UniqueName: \"kubernetes.io/projected/6cc9c424-b3f7-4744-92d8-5844915879bf-kube-api-access-z7nmt\") pod \"barbican-operator-controller-manager-fc589b45f-qh6xs\" (UID: \"6cc9c424-b3f7-4744-92d8-5844915879bf\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.789733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76n2\" (UniqueName: \"kubernetes.io/projected/a27712fb-eb89-49ff-b5a5-1432a0a4774f-kube-api-access-w76n2\") pod \"cinder-operator-controller-manager-787499fbb-drsgx\" (UID: 
\"a27712fb-eb89-49ff-b5a5-1432a0a4774f\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.789754 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlpp\" (UniqueName: \"kubernetes.io/projected/d0a68002-1422-44d3-8656-2901a42b42f4-kube-api-access-8hlpp\") pod \"designate-operator-controller-manager-8f4c5cb64-bxf2k\" (UID: \"d0a68002-1422-44d3-8656-2901a42b42f4\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.794579 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.806497 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.815150 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.816264 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.820806 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.826008 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2pf5z" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.889819 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-66z2p"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.891166 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.892723 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76n2\" (UniqueName: \"kubernetes.io/projected/a27712fb-eb89-49ff-b5a5-1432a0a4774f-kube-api-access-w76n2\") pod \"cinder-operator-controller-manager-787499fbb-drsgx\" (UID: \"a27712fb-eb89-49ff-b5a5-1432a0a4774f\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.892771 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlpp\" (UniqueName: \"kubernetes.io/projected/d0a68002-1422-44d3-8656-2901a42b42f4-kube-api-access-8hlpp\") pod \"designate-operator-controller-manager-8f4c5cb64-bxf2k\" (UID: \"d0a68002-1422-44d3-8656-2901a42b42f4\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.892834 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stswz\" (UniqueName: \"kubernetes.io/projected/c4915a12-75dc-4b2e-a039-c98287c8cec4-kube-api-access-stswz\") pod \"heat-operator-controller-manager-65dc6c8d9c-r7xlm\" (UID: \"c4915a12-75dc-4b2e-a039-c98287c8cec4\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.892872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrht\" (UniqueName: \"kubernetes.io/projected/e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2-kube-api-access-clrht\") pod \"glance-operator-controller-manager-64d858bbbd-k4bh2\" (UID: \"e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2\") " pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.892894 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nmt\" (UniqueName: \"kubernetes.io/projected/6cc9c424-b3f7-4744-92d8-5844915879bf-kube-api-access-z7nmt\") pod \"barbican-operator-controller-manager-fc589b45f-qh6xs\" (UID: \"6cc9c424-b3f7-4744-92d8-5844915879bf\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.903992 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.904377 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qzzct" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.945515 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76n2\" (UniqueName: \"kubernetes.io/projected/a27712fb-eb89-49ff-b5a5-1432a0a4774f-kube-api-access-w76n2\") pod \"cinder-operator-controller-manager-787499fbb-drsgx\" (UID: \"a27712fb-eb89-49ff-b5a5-1432a0a4774f\") " pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.951241 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stswz\" (UniqueName: \"kubernetes.io/projected/c4915a12-75dc-4b2e-a039-c98287c8cec4-kube-api-access-stswz\") pod 
\"heat-operator-controller-manager-65dc6c8d9c-r7xlm\" (UID: \"c4915a12-75dc-4b2e-a039-c98287c8cec4\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.952860 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlpp\" (UniqueName: \"kubernetes.io/projected/d0a68002-1422-44d3-8656-2901a42b42f4-kube-api-access-8hlpp\") pod \"designate-operator-controller-manager-8f4c5cb64-bxf2k\" (UID: \"d0a68002-1422-44d3-8656-2901a42b42f4\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.958602 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-66z2p"] Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.961143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nmt\" (UniqueName: \"kubernetes.io/projected/6cc9c424-b3f7-4744-92d8-5844915879bf-kube-api-access-z7nmt\") pod \"barbican-operator-controller-manager-fc589b45f-qh6xs\" (UID: \"6cc9c424-b3f7-4744-92d8-5844915879bf\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.969654 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.970018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrht\" (UniqueName: \"kubernetes.io/projected/e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2-kube-api-access-clrht\") pod \"glance-operator-controller-manager-64d858bbbd-k4bh2\" (UID: \"e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2\") " pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.995621 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfd8\" (UniqueName: \"kubernetes.io/projected/627cef1f-bb76-4dd2-b7d1-b3f55bdeb335-kube-api-access-nhfd8\") pod \"horizon-operator-controller-manager-5fb775575f-bb6l7\" (UID: \"627cef1f-bb76-4dd2-b7d1-b3f55bdeb335\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.995696 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gjk\" (UniqueName: \"kubernetes.io/projected/8d42c163-9e7d-485f-b94e-4796166ba8f9-kube-api-access-l2gjk\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.995721 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:10 crc kubenswrapper[4735]: I0131 15:12:10.999530 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws"] Jan 31 15:12:11 
crc kubenswrapper[4735]: I0131 15:12:11.000299 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.009885 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-86xkv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.021051 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.021806 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.026983 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7jhf7" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.031764 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.043952 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.072886 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.074494 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.084038 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.085134 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8jpjb" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.088921 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.102546 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.105355 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhfd8\" (UniqueName: \"kubernetes.io/projected/627cef1f-bb76-4dd2-b7d1-b3f55bdeb335-kube-api-access-nhfd8\") pod \"horizon-operator-controller-manager-5fb775575f-bb6l7\" (UID: \"627cef1f-bb76-4dd2-b7d1-b3f55bdeb335\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.105409 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xccbs\" (UniqueName: \"kubernetes.io/projected/ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d-kube-api-access-xccbs\") pod \"ironic-operator-controller-manager-87bd9d46f-hzrws\" (UID: \"ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.105471 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtbq\" (UniqueName: \"kubernetes.io/projected/0aef4ea8-3e5e-497e-b2bd-280d521e895f-kube-api-access-lgtbq\") pod \"keystone-operator-controller-manager-64469b487f-rfrcn\" (UID: \"0aef4ea8-3e5e-497e-b2bd-280d521e895f\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.105546 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wfs\" (UniqueName: \"kubernetes.io/projected/5926394d-8ab0-46d7-9bb6-1ea59a0d7511-kube-api-access-d4wfs\") pod \"mariadb-operator-controller-manager-67bf948998-7v4mn\" (UID: \"5926394d-8ab0-46d7-9bb6-1ea59a0d7511\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.105581 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gjk\" (UniqueName: \"kubernetes.io/projected/8d42c163-9e7d-485f-b94e-4796166ba8f9-kube-api-access-l2gjk\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.110810 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.111101 4735 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.111168 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert podName:8d42c163-9e7d-485f-b94e-4796166ba8f9 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:11.611146045 +0000 UTC m=+817.384475087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert") pod "infra-operator-controller-manager-79955696d6-66z2p" (UID: "8d42c163-9e7d-485f-b94e-4796166ba8f9") : secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.123698 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.124721 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.126446 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jsq6l" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.139475 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gjk\" (UniqueName: \"kubernetes.io/projected/8d42c163-9e7d-485f-b94e-4796166ba8f9-kube-api-access-l2gjk\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.139994 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhfd8\" (UniqueName: \"kubernetes.io/projected/627cef1f-bb76-4dd2-b7d1-b3f55bdeb335-kube-api-access-nhfd8\") pod \"horizon-operator-controller-manager-5fb775575f-bb6l7\" (UID: \"627cef1f-bb76-4dd2-b7d1-b3f55bdeb335\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.140062 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.140887 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.145146 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rhgqp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.147849 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.158366 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.163344 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.164080 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.165792 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gxbpv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.170244 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.178574 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.180130 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.184748 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.185879 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.186114 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9d2ml" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.190371 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.191242 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.196261 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.197094 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.205028 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.205246 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-28txq" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.205352 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-klwcr" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.207657 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212237 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xccbs\" (UniqueName: \"kubernetes.io/projected/ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d-kube-api-access-xccbs\") pod \"ironic-operator-controller-manager-87bd9d46f-hzrws\" (UID: \"ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212337 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9ld\" (UniqueName: \"kubernetes.io/projected/c814b622-e60d-492c-ae86-9e78b37297e4-kube-api-access-bx9ld\") pod \"neutron-operator-controller-manager-576995988b-rtzcv\" (UID: \"c814b622-e60d-492c-ae86-9e78b37297e4\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212368 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcgj\" (UniqueName: \"kubernetes.io/projected/b469fe09-816f-4ffa-a61d-82e448011837-kube-api-access-gvcgj\") pod \"octavia-operator-controller-manager-7b89ddb58-f8f64\" (UID: \"b469fe09-816f-4ffa-a61d-82e448011837\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212402 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgtbq\" (UniqueName: \"kubernetes.io/projected/0aef4ea8-3e5e-497e-b2bd-280d521e895f-kube-api-access-lgtbq\") pod \"keystone-operator-controller-manager-64469b487f-rfrcn\" (UID: \"0aef4ea8-3e5e-497e-b2bd-280d521e895f\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212459 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qz8\" (UniqueName: \"kubernetes.io/projected/f2610081-f50c-441f-8b8a-bc2a236065f1-kube-api-access-99qz8\") pod 
\"nova-operator-controller-manager-5644b66645-f8h8s\" (UID: \"f2610081-f50c-441f-8b8a-bc2a236065f1\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wfs\" (UniqueName: \"kubernetes.io/projected/5926394d-8ab0-46d7-9bb6-1ea59a0d7511-kube-api-access-d4wfs\") pod \"mariadb-operator-controller-manager-67bf948998-7v4mn\" (UID: \"5926394d-8ab0-46d7-9bb6-1ea59a0d7511\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212790 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw226\" (UniqueName: \"kubernetes.io/projected/437ef1c6-09b5-45c2-b88d-e42e432ae801-kube-api-access-dw226\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212890 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67qb\" (UniqueName: \"kubernetes.io/projected/8bc95764-b0cb-4206-af35-fefb00d8c71f-kube-api-access-r67qb\") pod \"ovn-operator-controller-manager-788c46999f-thcqg\" (UID: \"8bc95764-b0cb-4206-af35-fefb00d8c71f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.212926 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqhp\" (UniqueName: \"kubernetes.io/projected/f4b1920b-1fb0-4f10-a3fc-97d19aacc34e-kube-api-access-fpqhp\") pod \"manila-operator-controller-manager-7d96d95959-jxthf\" (UID: \"f4b1920b-1fb0-4f10-a3fc-97d19aacc34e\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.220612 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.230595 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.231408 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.233028 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xccbs\" (UniqueName: \"kubernetes.io/projected/ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d-kube-api-access-xccbs\") pod \"ironic-operator-controller-manager-87bd9d46f-hzrws\" (UID: \"ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.233155 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zmg7p" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.235925 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.236197 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wfs\" (UniqueName: \"kubernetes.io/projected/5926394d-8ab0-46d7-9bb6-1ea59a0d7511-kube-api-access-d4wfs\") pod \"mariadb-operator-controller-manager-67bf948998-7v4mn\" (UID: \"5926394d-8ab0-46d7-9bb6-1ea59a0d7511\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.237899 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.245601 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.255693 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.256572 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.259997 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.261045 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.266076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgtbq\" (UniqueName: \"kubernetes.io/projected/0aef4ea8-3e5e-497e-b2bd-280d521e895f-kube-api-access-lgtbq\") pod \"keystone-operator-controller-manager-64469b487f-rfrcn\" (UID: \"0aef4ea8-3e5e-497e-b2bd-280d521e895f\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.267244 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-h7mmd" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.267472 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cpw8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.273184 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.273226 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.277195 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.278039 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.281031 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rhwgt" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.303060 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331082 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r67qb\" (UniqueName: \"kubernetes.io/projected/8bc95764-b0cb-4206-af35-fefb00d8c71f-kube-api-access-r67qb\") pod \"ovn-operator-controller-manager-788c46999f-thcqg\" (UID: \"8bc95764-b0cb-4206-af35-fefb00d8c71f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331158 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcqn\" (UniqueName: \"kubernetes.io/projected/57aa7be3-f130-41b7-a400-1c2ddd1b8ce3-kube-api-access-xvcqn\") pod \"test-operator-controller-manager-56f8bfcd9f-dlpt5\" (UID: \"57aa7be3-f130-41b7-a400-1c2ddd1b8ce3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331192 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqhp\" (UniqueName: \"kubernetes.io/projected/f4b1920b-1fb0-4f10-a3fc-97d19aacc34e-kube-api-access-fpqhp\") pod \"manila-operator-controller-manager-7d96d95959-jxthf\" (UID: \"f4b1920b-1fb0-4f10-a3fc-97d19aacc34e\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331265 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9ld\" (UniqueName: \"kubernetes.io/projected/c814b622-e60d-492c-ae86-9e78b37297e4-kube-api-access-bx9ld\") pod \"neutron-operator-controller-manager-576995988b-rtzcv\" (UID: \"c814b622-e60d-492c-ae86-9e78b37297e4\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331292 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvcgj\" (UniqueName: \"kubernetes.io/projected/b469fe09-816f-4ffa-a61d-82e448011837-kube-api-access-gvcgj\") pod \"octavia-operator-controller-manager-7b89ddb58-f8f64\" (UID: \"b469fe09-816f-4ffa-a61d-82e448011837\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331367 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qz8\" (UniqueName: \"kubernetes.io/projected/f2610081-f50c-441f-8b8a-bc2a236065f1-kube-api-access-99qz8\") pod 
\"nova-operator-controller-manager-5644b66645-f8h8s\" (UID: \"f2610081-f50c-441f-8b8a-bc2a236065f1\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.331409 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw226\" (UniqueName: \"kubernetes.io/projected/437ef1c6-09b5-45c2-b88d-e42e432ae801-kube-api-access-dw226\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.338813 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.338874 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert podName:437ef1c6-09b5-45c2-b88d-e42e432ae801 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:11.838859025 +0000 UTC m=+817.612188067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" (UID: "437ef1c6-09b5-45c2-b88d-e42e432ae801") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.359071 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r67qb\" (UniqueName: \"kubernetes.io/projected/8bc95764-b0cb-4206-af35-fefb00d8c71f-kube-api-access-r67qb\") pod \"ovn-operator-controller-manager-788c46999f-thcqg\" (UID: \"8bc95764-b0cb-4206-af35-fefb00d8c71f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.369163 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.376582 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.377059 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.381834 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rnjs6" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.387676 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvcgj\" (UniqueName: \"kubernetes.io/projected/b469fe09-816f-4ffa-a61d-82e448011837-kube-api-access-gvcgj\") pod \"octavia-operator-controller-manager-7b89ddb58-f8f64\" (UID: \"b469fe09-816f-4ffa-a61d-82e448011837\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.396394 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qz8\" (UniqueName: \"kubernetes.io/projected/f2610081-f50c-441f-8b8a-bc2a236065f1-kube-api-access-99qz8\") pod \"nova-operator-controller-manager-5644b66645-f8h8s\" (UID: \"f2610081-f50c-441f-8b8a-bc2a236065f1\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.397979 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqhp\" (UniqueName: \"kubernetes.io/projected/f4b1920b-1fb0-4f10-a3fc-97d19aacc34e-kube-api-access-fpqhp\") pod \"manila-operator-controller-manager-7d96d95959-jxthf\" (UID: \"f4b1920b-1fb0-4f10-a3fc-97d19aacc34e\") " pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.398460 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw226\" (UniqueName: \"kubernetes.io/projected/437ef1c6-09b5-45c2-b88d-e42e432ae801-kube-api-access-dw226\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.406559 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9ld\" (UniqueName: \"kubernetes.io/projected/c814b622-e60d-492c-ae86-9e78b37297e4-kube-api-access-bx9ld\") pod \"neutron-operator-controller-manager-576995988b-rtzcv\" (UID: \"c814b622-e60d-492c-ae86-9e78b37297e4\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.423720 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.424870 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.443806 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czpbj\" (UniqueName: \"kubernetes.io/projected/1cd70b29-6ef8-4625-93eb-f7113200b385-kube-api-access-czpbj\") pod \"swift-operator-controller-manager-76864d4fdb-ps2jp\" (UID: \"1cd70b29-6ef8-4625-93eb-f7113200b385\") " pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.443886 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472lf\" (UniqueName: \"kubernetes.io/projected/bc56f00a-31c6-474b-af93-59442f956567-kube-api-access-472lf\") pod \"placement-operator-controller-manager-5b964cf4cd-zgmmx\" (UID: \"bc56f00a-31c6-474b-af93-59442f956567\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.443950 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcqn\" (UniqueName: \"kubernetes.io/projected/57aa7be3-f130-41b7-a400-1c2ddd1b8ce3-kube-api-access-xvcqn\") pod \"test-operator-controller-manager-56f8bfcd9f-dlpt5\" (UID: \"57aa7be3-f130-41b7-a400-1c2ddd1b8ce3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.444016 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/0c4b1bae-6cff-4914-907c-f6c9867a803b-kube-api-access-42595\") pod \"watcher-operator-controller-manager-586b95b788-pqmvf\" (UID: \"0c4b1bae-6cff-4914-907c-f6c9867a803b\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.444080 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8fh\" (UniqueName: \"kubernetes.io/projected/80924e89-7cef-4879-b955-28d3ef271729-kube-api-access-xp8fh\") pod \"telemetry-operator-controller-manager-8446785844-jtbmg\" (UID: \"80924e89-7cef-4879-b955-28d3ef271729\") " pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.457571 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.474447 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.478977 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcqn\" (UniqueName: \"kubernetes.io/projected/57aa7be3-f130-41b7-a400-1c2ddd1b8ce3-kube-api-access-xvcqn\") pod \"test-operator-controller-manager-56f8bfcd9f-dlpt5\" (UID: \"57aa7be3-f130-41b7-a400-1c2ddd1b8ce3\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.505996 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.519255 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.520670 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.530914 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.530926 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.534275 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.535737 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ngr7p" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.537370 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545703 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqrfm\" (UniqueName: \"kubernetes.io/projected/3808aa6d-1386-4e9a-81b2-e37c11246170-kube-api-access-wqrfm\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/0c4b1bae-6cff-4914-907c-f6c9867a803b-kube-api-access-42595\") pod \"watcher-operator-controller-manager-586b95b788-pqmvf\" (UID: \"0c4b1bae-6cff-4914-907c-f6c9867a803b\") " 
pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545788 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8fh\" (UniqueName: \"kubernetes.io/projected/80924e89-7cef-4879-b955-28d3ef271729-kube-api-access-xp8fh\") pod \"telemetry-operator-controller-manager-8446785844-jtbmg\" (UID: \"80924e89-7cef-4879-b955-28d3ef271729\") " pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545848 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czpbj\" (UniqueName: \"kubernetes.io/projected/1cd70b29-6ef8-4625-93eb-f7113200b385-kube-api-access-czpbj\") pod \"swift-operator-controller-manager-76864d4fdb-ps2jp\" (UID: \"1cd70b29-6ef8-4625-93eb-f7113200b385\") " pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545881 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.545910 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-472lf\" (UniqueName: \"kubernetes.io/projected/bc56f00a-31c6-474b-af93-59442f956567-kube-api-access-472lf\") pod \"placement-operator-controller-manager-5b964cf4cd-zgmmx\" (UID: \"bc56f00a-31c6-474b-af93-59442f956567\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.549251 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.589671 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42595\" (UniqueName: \"kubernetes.io/projected/0c4b1bae-6cff-4914-907c-f6c9867a803b-kube-api-access-42595\") pod \"watcher-operator-controller-manager-586b95b788-pqmvf\" (UID: \"0c4b1bae-6cff-4914-907c-f6c9867a803b\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.590303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8fh\" (UniqueName: \"kubernetes.io/projected/80924e89-7cef-4879-b955-28d3ef271729-kube-api-access-xp8fh\") pod \"telemetry-operator-controller-manager-8446785844-jtbmg\" (UID: \"80924e89-7cef-4879-b955-28d3ef271729\") " pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.593315 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-472lf\" (UniqueName: \"kubernetes.io/projected/bc56f00a-31c6-474b-af93-59442f956567-kube-api-access-472lf\") pod \"placement-operator-controller-manager-5b964cf4cd-zgmmx\" (UID: \"bc56f00a-31c6-474b-af93-59442f956567\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.600267 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czpbj\" (UniqueName: \"kubernetes.io/projected/1cd70b29-6ef8-4625-93eb-f7113200b385-kube-api-access-czpbj\") pod \"swift-operator-controller-manager-76864d4fdb-ps2jp\" (UID: \"1cd70b29-6ef8-4625-93eb-f7113200b385\") " pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.600578 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.608596 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.609556 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.613395 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pd96g" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.619160 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.632604 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.648005 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.649637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqrfm\" (UniqueName: \"kubernetes.io/projected/3808aa6d-1386-4e9a-81b2-e37c11246170-kube-api-access-wqrfm\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.649693 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgp6l\" (UniqueName: \"kubernetes.io/projected/9ba40dc8-290a-4a40-a039-609874c181d4-kube-api-access-hgp6l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lmv8c\" (UID: \"9ba40dc8-290a-4a40-a039-609874c181d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.649827 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.649868 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.649995 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.650046 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:12.150030376 +0000 UTC m=+817.923359418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.650461 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.650499 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:12.150482449 +0000 UTC m=+817.923811481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.650750 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.650775 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert podName:8d42c163-9e7d-485f-b94e-4796166ba8f9 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:12.650767207 +0000 UTC m=+818.424096249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert") pod "infra-operator-controller-manager-79955696d6-66z2p" (UID: "8d42c163-9e7d-485f-b94e-4796166ba8f9") : secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.677373 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.678467 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqrfm\" (UniqueName: \"kubernetes.io/projected/3808aa6d-1386-4e9a-81b2-e37c11246170-kube-api-access-wqrfm\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.683313 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.744154 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.756357 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgp6l\" (UniqueName: \"kubernetes.io/projected/9ba40dc8-290a-4a40-a039-609874c181d4-kube-api-access-hgp6l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lmv8c\" (UID: \"9ba40dc8-290a-4a40-a039-609874c181d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.770801 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.784304 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.795293 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgp6l\" (UniqueName: \"kubernetes.io/projected/9ba40dc8-290a-4a40-a039-609874c181d4-kube-api-access-hgp6l\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lmv8c\" (UID: \"9ba40dc8-290a-4a40-a039-609874c181d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.842386 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k"] Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.858224 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.858437 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: E0131 15:12:11.858494 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert podName:437ef1c6-09b5-45c2-b88d-e42e432ae801 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:12.858477159 +0000 UTC m=+818.631806191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" (UID: "437ef1c6-09b5-45c2-b88d-e42e432ae801") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:11 crc kubenswrapper[4735]: W0131 15:12:11.883119 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0a68002_1422_44d3_8656_2901a42b42f4.slice/crio-c373cbe93114278dcd4b0ac8a65ad40d8ecd414d75e77214440db87460763a2e WatchSource:0}: Error finding container c373cbe93114278dcd4b0ac8a65ad40d8ecd414d75e77214440db87460763a2e: Status 404 returned error can't find the container with id c373cbe93114278dcd4b0ac8a65ad40d8ecd414d75e77214440db87460763a2e Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.921907 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" event={"ID":"a27712fb-eb89-49ff-b5a5-1432a0a4774f","Type":"ContainerStarted","Data":"f6325b6ec90a6bad1703c75ad7cd3298694881ff130051f61e807ee706c362fa"} Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.923393 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" event={"ID":"d0a68002-1422-44d3-8656-2901a42b42f4","Type":"ContainerStarted","Data":"c373cbe93114278dcd4b0ac8a65ad40d8ecd414d75e77214440db87460763a2e"} Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.946369 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" Jan 31 15:12:11 crc kubenswrapper[4735]: I0131 15:12:11.960015 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.162476 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.162900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.163038 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.163087 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:13.163075303 +0000 UTC m=+818.936404345 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.163448 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.163481 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:13.163473254 +0000 UTC m=+818.936802296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.292860 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.308926 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef1d6c4a_a87c_4afc_9b80_65c5370a3c5d.slice/crio-078eec96923c9222cf4004276124547a52dec61974ba36586c27df7a7bf10cf0 WatchSource:0}: Error finding container 078eec96923c9222cf4004276124547a52dec61974ba36586c27df7a7bf10cf0: Status 404 returned error can't find the container with id 078eec96923c9222cf4004276124547a52dec61974ba36586c27df7a7bf10cf0 Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.311596 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.328990 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.367276 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.367760 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5926394d_8ab0_46d7_9bb6_1ea59a0d7511.slice/crio-7a715294e48d8fe6afb0082d5cb0377eff6156a74099ab48cfe23dce2d1aa1fc WatchSource:0}: Error finding container 7a715294e48d8fe6afb0082d5cb0377eff6156a74099ab48cfe23dce2d1aa1fc: Status 404 returned error can't find the container with id 7a715294e48d8fe6afb0082d5cb0377eff6156a74099ab48cfe23dce2d1aa1fc Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.371691 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4915a12_75dc_4b2e_a039_c98287c8cec4.slice/crio-7f2b71e3a905817cd78f30addf10ce98120c29aa9ef49fd4682b697dee9aee3d WatchSource:0}: Error finding container 7f2b71e3a905817cd78f30addf10ce98120c29aa9ef49fd4682b697dee9aee3d: Status 404 returned error can't find the container with id 7f2b71e3a905817cd78f30addf10ce98120c29aa9ef49fd4682b697dee9aee3d Jan 
31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.382486 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.395264 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.487732 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.500783 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.503194 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc56f00a_31c6_474b_af93_59442f956567.slice/crio-a83603fa9dc8042966d62358a9b4d240da5f564aedef16bdf2e50f42a78b6c61 WatchSource:0}: Error finding container a83603fa9dc8042966d62358a9b4d240da5f564aedef16bdf2e50f42a78b6c61: Status 404 returned error can't find the container with id a83603fa9dc8042966d62358a9b4d240da5f564aedef16bdf2e50f42a78b6c61 Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.504855 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb469fe09_816f_4ffa_a61d_82e448011837.slice/crio-c6a23da4eb3403f6e0e231e8b81f48ed1afd5789c8fea0d4089372e3ec17e9e5 WatchSource:0}: Error finding container c6a23da4eb3403f6e0e231e8b81f48ed1afd5789c8fea0d4089372e3ec17e9e5: Status 404 returned error can't find the container with id c6a23da4eb3403f6e0e231e8b81f48ed1afd5789c8fea0d4089372e3ec17e9e5 Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.507005 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.512634 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.513020 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2610081_f50c_441f_8b8a_bc2a236065f1.slice/crio-334e8acca07747fc9f4f2b2fc5cc5f91e0f095ca0754ad7b886c7165934d121f WatchSource:0}: Error finding container 334e8acca07747fc9f4f2b2fc5cc5f91e0f095ca0754ad7b886c7165934d121f: Status 404 returned error can't find the container with id 334e8acca07747fc9f4f2b2fc5cc5f91e0f095ca0754ad7b886c7165934d121f Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.514814 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r67qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-thcqg_openstack-operators(8bc95764-b0cb-4206-af35-fefb00d8c71f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.515969 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" podUID="8bc95764-b0cb-4206-af35-fefb00d8c71f" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.524458 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvcgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7b89ddb58-f8f64_openstack-operators(b469fe09-816f-4ffa-a61d-82e448011837): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.525721 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" podUID="b469fe09-816f-4ffa-a61d-82e448011837" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.529626 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.546773 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.561186 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.627937 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.631539 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4b1bae_6cff_4914_907c_f6c9867a803b.slice/crio-99d0680d7cf9bbb00b6af3be89901e03520ec61484258fba90da9709b66b8e39 WatchSource:0}: Error finding container 99d0680d7cf9bbb00b6af3be89901e03520ec61484258fba90da9709b66b8e39: Status 404 returned error can't find the container with id 99d0680d7cf9bbb00b6af3be89901e03520ec61484258fba90da9709b66b8e39 Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.634987 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5"] Jan 31 15:12:12 crc kubenswrapper[4735]: W0131 15:12:12.635121 
4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa7be3_f130_41b7_a400_1c2ddd1b8ce3.slice/crio-676ba01499830234b6339407a0958c12541550a50c0f9965fbda1c642375a8b5 WatchSource:0}: Error finding container 676ba01499830234b6339407a0958c12541550a50c0f9965fbda1c642375a8b5: Status 404 returned error can't find the container with id 676ba01499830234b6339407a0958c12541550a50c0f9965fbda1c642375a8b5 Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.635242 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-pqmvf_openstack-operators(0c4b1bae-6cff-4914-907c-f6c9867a803b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.637152 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" podUID="0c4b1bae-6cff-4914-907c-f6c9867a803b" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.639105 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xvcqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-dlpt5_openstack-operators(57aa7be3-f130-41b7-a400-1c2ddd1b8ce3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.640260 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" podUID="57aa7be3-f130-41b7-a400-1c2ddd1b8ce3" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.649668 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg"] Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.655522 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c"] Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.661371 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:2b7b63bdb08d5b163e2477ca7afc8ca449e5ff6a39cef97f3e63c663e7994c71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp8fh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-8446785844-jtbmg_openstack-operators(80924e89-7cef-4879-b955-28d3ef271729): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.662983 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" podUID="80924e89-7cef-4879-b955-28d3ef271729" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.663027 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgp6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lmv8c_openstack-operators(9ba40dc8-290a-4a40-a039-609874c181d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.665716 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" podUID="9ba40dc8-290a-4a40-a039-609874c181d4" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.682365 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.682505 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.682558 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert podName:8d42c163-9e7d-485f-b94e-4796166ba8f9 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:14.682545122 +0000 UTC m=+820.455874164 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert") pod "infra-operator-controller-manager-79955696d6-66z2p" (UID: "8d42c163-9e7d-485f-b94e-4796166ba8f9") : secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.886769 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.886877 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.886949 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert podName:437ef1c6-09b5-45c2-b88d-e42e432ae801 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:14.886934049 +0000 UTC m=+820.660263091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" (UID: "437ef1c6-09b5-45c2-b88d-e42e432ae801") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.947996 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" event={"ID":"f2610081-f50c-441f-8b8a-bc2a236065f1","Type":"ContainerStarted","Data":"334e8acca07747fc9f4f2b2fc5cc5f91e0f095ca0754ad7b886c7165934d121f"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.952649 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" event={"ID":"e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2","Type":"ContainerStarted","Data":"8072674ada943ca33e17c7f547d74098958b86e81b5dea6ed5961dae6a146222"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.955245 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" event={"ID":"0c4b1bae-6cff-4914-907c-f6c9867a803b","Type":"ContainerStarted","Data":"99d0680d7cf9bbb00b6af3be89901e03520ec61484258fba90da9709b66b8e39"} Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.956942 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" podUID="0c4b1bae-6cff-4914-907c-f6c9867a803b" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.958702 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" event={"ID":"ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d","Type":"ContainerStarted","Data":"078eec96923c9222cf4004276124547a52dec61974ba36586c27df7a7bf10cf0"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 
15:12:12.961833 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" event={"ID":"5926394d-8ab0-46d7-9bb6-1ea59a0d7511","Type":"ContainerStarted","Data":"7a715294e48d8fe6afb0082d5cb0377eff6156a74099ab48cfe23dce2d1aa1fc"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.965119 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" event={"ID":"6cc9c424-b3f7-4744-92d8-5844915879bf","Type":"ContainerStarted","Data":"8b2ab5b0cd7af756ba449d9377c46dd0772df619338e00ef6212318e555132c8"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.974071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" event={"ID":"bc56f00a-31c6-474b-af93-59442f956567","Type":"ContainerStarted","Data":"a83603fa9dc8042966d62358a9b4d240da5f564aedef16bdf2e50f42a78b6c61"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.978667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" event={"ID":"57aa7be3-f130-41b7-a400-1c2ddd1b8ce3","Type":"ContainerStarted","Data":"676ba01499830234b6339407a0958c12541550a50c0f9965fbda1c642375a8b5"} Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.980594 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" podUID="57aa7be3-f130-41b7-a400-1c2ddd1b8ce3" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.980677 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" event={"ID":"8bc95764-b0cb-4206-af35-fefb00d8c71f","Type":"ContainerStarted","Data":"b7dafe2473121983411a56bea9748a4e8f8f05264320c1b0a733b7bbca7d6d11"} Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.982051 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" podUID="8bc95764-b0cb-4206-af35-fefb00d8c71f" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.982575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" event={"ID":"f4b1920b-1fb0-4f10-a3fc-97d19aacc34e","Type":"ContainerStarted","Data":"b7a476b74acf221ce9a717801be2798ebed82752cc42e903b9b60a780feffe97"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.984179 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" event={"ID":"b469fe09-816f-4ffa-a61d-82e448011837","Type":"ContainerStarted","Data":"c6a23da4eb3403f6e0e231e8b81f48ed1afd5789c8fea0d4089372e3ec17e9e5"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.985432 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" 
event={"ID":"627cef1f-bb76-4dd2-b7d1-b3f55bdeb335","Type":"ContainerStarted","Data":"73a02365d8df4dd042460f21a428f8d6439a28b83ec8b145931e4ed29d19e2a5"} Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.985284 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" podUID="b469fe09-816f-4ffa-a61d-82e448011837" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.987029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" event={"ID":"c814b622-e60d-492c-ae86-9e78b37297e4","Type":"ContainerStarted","Data":"df7b90042371b408b58d3e9bbd24659e4294d5be4c52bf4bff96d46371411378"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.989879 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" event={"ID":"0aef4ea8-3e5e-497e-b2bd-280d521e895f","Type":"ContainerStarted","Data":"2595dca0f27f63b7d2fa09031bc1b41072d1e444093c81a8d65d451ddffa576d"} Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.991157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" event={"ID":"9ba40dc8-290a-4a40-a039-609874c181d4","Type":"ContainerStarted","Data":"cdc61756d83a2a3f703aab784bca1a18a7b1a4a84f61cecb5b856abdcd4cdfa9"} Jan 31 15:12:12 crc kubenswrapper[4735]: E0131 15:12:12.993282 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" podUID="9ba40dc8-290a-4a40-a039-609874c181d4" Jan 31 15:12:12 crc kubenswrapper[4735]: I0131 15:12:12.997789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" event={"ID":"c4915a12-75dc-4b2e-a039-c98287c8cec4","Type":"ContainerStarted","Data":"7f2b71e3a905817cd78f30addf10ce98120c29aa9ef49fd4682b697dee9aee3d"} Jan 31 15:12:13 crc kubenswrapper[4735]: I0131 15:12:13.011960 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" event={"ID":"80924e89-7cef-4879-b955-28d3ef271729","Type":"ContainerStarted","Data":"b1e0438980553dd7e6769678076e0331be705f0a7ebc987286db7fc773b643fb"} Jan 31 15:12:13 crc kubenswrapper[4735]: E0131 15:12:13.013594 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:2b7b63bdb08d5b163e2477ca7afc8ca449e5ff6a39cef97f3e63c663e7994c71\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" podUID="80924e89-7cef-4879-b955-28d3ef271729" Jan 31 15:12:13 crc kubenswrapper[4735]: I0131 15:12:13.022507 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" 
event={"ID":"1cd70b29-6ef8-4625-93eb-f7113200b385","Type":"ContainerStarted","Data":"821f9cc51065f0c28dde7fc253fbd1d95fc5c4fcb6a46a28c9920445475d5b9e"} Jan 31 15:12:13 crc kubenswrapper[4735]: I0131 15:12:13.190935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:13 crc kubenswrapper[4735]: I0131 15:12:13.190995 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:13 crc kubenswrapper[4735]: E0131 15:12:13.191094 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 15:12:13 crc kubenswrapper[4735]: E0131 15:12:13.191110 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:13 crc kubenswrapper[4735]: E0131 15:12:13.191167 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:15.191151252 +0000 UTC m=+820.964480294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:13 crc kubenswrapper[4735]: E0131 15:12:13.191183 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:15.191176693 +0000 UTC m=+820.964505735 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.064071 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" podUID="b469fe09-816f-4ffa-a61d-82e448011837" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.064489 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" podUID="57aa7be3-f130-41b7-a400-1c2ddd1b8ce3" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.065200 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:2b7b63bdb08d5b163e2477ca7afc8ca449e5ff6a39cef97f3e63c663e7994c71\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" podUID="80924e89-7cef-4879-b955-28d3ef271729" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.065626 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" podUID="9ba40dc8-290a-4a40-a039-609874c181d4" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.065655 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" podUID="0c4b1bae-6cff-4914-907c-f6c9867a803b" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.071648 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" podUID="8bc95764-b0cb-4206-af35-fefb00d8c71f" Jan 31 15:12:14 crc kubenswrapper[4735]: I0131 15:12:14.717989 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:14 crc kubenswrapper[4735]: 
E0131 15:12:14.718211 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.718359 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert podName:8d42c163-9e7d-485f-b94e-4796166ba8f9 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:18.718322053 +0000 UTC m=+824.491651095 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert") pod "infra-operator-controller-manager-79955696d6-66z2p" (UID: "8d42c163-9e7d-485f-b94e-4796166ba8f9") : secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:14 crc kubenswrapper[4735]: I0131 15:12:14.921900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.922169 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:14 crc kubenswrapper[4735]: E0131 15:12:14.922299 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert podName:437ef1c6-09b5-45c2-b88d-e42e432ae801 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:18.922269767 +0000 UTC m=+824.695598959 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" (UID: "437ef1c6-09b5-45c2-b88d-e42e432ae801") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:15 crc kubenswrapper[4735]: I0131 15:12:15.226935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:15 crc kubenswrapper[4735]: I0131 15:12:15.226994 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:15 crc kubenswrapper[4735]: E0131 15:12:15.227130 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:15 crc kubenswrapper[4735]: E0131 15:12:15.227147 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 15:12:15 crc kubenswrapper[4735]: E0131 15:12:15.227185 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:19.22717074 +0000 UTC m=+825.000499782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:15 crc kubenswrapper[4735]: E0131 15:12:15.227236 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:19.227214882 +0000 UTC m=+825.000543984 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:18 crc kubenswrapper[4735]: I0131 15:12:18.788341 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:18 crc kubenswrapper[4735]: E0131 15:12:18.788535 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:18 crc kubenswrapper[4735]: E0131 15:12:18.789085 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert podName:8d42c163-9e7d-485f-b94e-4796166ba8f9 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:26.789063271 +0000 UTC m=+832.562392323 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert") pod "infra-operator-controller-manager-79955696d6-66z2p" (UID: "8d42c163-9e7d-485f-b94e-4796166ba8f9") : secret "infra-operator-webhook-server-cert" not found Jan 31 15:12:18 crc kubenswrapper[4735]: I0131 15:12:18.992372 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:18 crc kubenswrapper[4735]: E0131 15:12:18.992565 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:18 crc kubenswrapper[4735]: E0131 15:12:18.992657 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert podName:437ef1c6-09b5-45c2-b88d-e42e432ae801 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:26.992635575 +0000 UTC m=+832.765964677 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" (UID: "437ef1c6-09b5-45c2-b88d-e42e432ae801") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 15:12:19 crc kubenswrapper[4735]: I0131 15:12:19.295541 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:19 crc kubenswrapper[4735]: I0131 15:12:19.295613 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:19 crc kubenswrapper[4735]: E0131 15:12:19.295703 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:19 crc kubenswrapper[4735]: E0131 15:12:19.295758 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:27.295740927 +0000 UTC m=+833.069069969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:19 crc kubenswrapper[4735]: E0131 15:12:19.295703 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 15:12:19 crc kubenswrapper[4735]: E0131 15:12:19.295923 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:27.295886421 +0000 UTC m=+833.069215554 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:25 crc kubenswrapper[4735]: E0131 15:12:25.460487 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51" Jan 31 15:12:25 crc kubenswrapper[4735]: E0131 15:12:25.461691 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99qz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5644b66645-f8h8s_openstack-operators(f2610081-f50c-441f-8b8a-bc2a236065f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:12:25 crc kubenswrapper[4735]: E0131 15:12:25.462938 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" podUID="f2610081-f50c-441f-8b8a-bc2a236065f1" 
Jan 31 15:12:26 crc kubenswrapper[4735]: E0131 15:12:26.162088 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" podUID="f2610081-f50c-441f-8b8a-bc2a236065f1" Jan 31 15:12:26 crc kubenswrapper[4735]: I0131 15:12:26.811353 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:26 crc kubenswrapper[4735]: I0131 15:12:26.819668 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d42c163-9e7d-485f-b94e-4796166ba8f9-cert\") pod \"infra-operator-controller-manager-79955696d6-66z2p\" (UID: \"8d42c163-9e7d-485f-b94e-4796166ba8f9\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:26 crc kubenswrapper[4735]: I0131 15:12:26.958477 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.014105 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.018125 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/437ef1c6-09b5-45c2-b88d-e42e432ae801-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk\" (UID: \"437ef1c6-09b5-45c2-b88d-e42e432ae801\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.174221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" event={"ID":"1cd70b29-6ef8-4625-93eb-f7113200b385","Type":"ContainerStarted","Data":"96c5f4e82452af734fa59af7f8c89daf4204f921f20e0f3921618d5fe58f586e"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.174565 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.175518 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" event={"ID":"6cc9c424-b3f7-4744-92d8-5844915879bf","Type":"ContainerStarted","Data":"159f5966b5502a9aec87561254e22d12865015cb1019a485d14f4f20ac86d200"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.175647 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 
15:12:27.176955 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" event={"ID":"c4915a12-75dc-4b2e-a039-c98287c8cec4","Type":"ContainerStarted","Data":"a6b2d2b2d89cad33a76189c19386e5070a72b62a82e3b4238448385fa7fe2e61"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.177328 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.178920 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.195598 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" event={"ID":"f4b1920b-1fb0-4f10-a3fc-97d19aacc34e","Type":"ContainerStarted","Data":"ceb3c832ac66461cafaa6e0d45b7e1148ace985395cf1624c804d1a0aae664c3"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.196212 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.211851 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" event={"ID":"a27712fb-eb89-49ff-b5a5-1432a0a4774f","Type":"ContainerStarted","Data":"3e4eb3bf436f1fbe431e15e9c8ba696b90ebb51758448eb4c9f42cca1ee2b903"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.212127 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.233820 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" event={"ID":"e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2","Type":"ContainerStarted","Data":"afdc24715b32f5b4c425e5e16e25500f7c06322b60e56c6bc3a4bad61aeca625"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.234564 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.248723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" event={"ID":"d0a68002-1422-44d3-8656-2901a42b42f4","Type":"ContainerStarted","Data":"b72e69b89ff0dd51fc002a2e82bb9024c2125a11e69e44fcff5171595dbb005a"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.249485 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.252061 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" event={"ID":"c814b622-e60d-492c-ae86-9e78b37297e4","Type":"ContainerStarted","Data":"21c7114e3ccefe31678b61867bfbfa845296947e9225875cd1ea1c5d05c09931"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.252699 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:27 crc 
kubenswrapper[4735]: I0131 15:12:27.267529 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" podStartSLOduration=2.8008129139999998 podStartE2EDuration="16.267511652s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.510269398 +0000 UTC m=+818.283598440" lastFinishedPulling="2026-01-31 15:12:25.976968096 +0000 UTC m=+831.750297178" observedRunningTime="2026-01-31 15:12:27.233890697 +0000 UTC m=+833.007219749" watchObservedRunningTime="2026-01-31 15:12:27.267511652 +0000 UTC m=+833.040840684" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.269484 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" podStartSLOduration=3.369474329 podStartE2EDuration="17.269475308s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:11.990807359 +0000 UTC m=+817.764136401" lastFinishedPulling="2026-01-31 15:12:25.890808308 +0000 UTC m=+831.664137380" observedRunningTime="2026-01-31 15:12:27.269006865 +0000 UTC m=+833.042335907" watchObservedRunningTime="2026-01-31 15:12:27.269475308 +0000 UTC m=+833.042804350" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.288099 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" event={"ID":"0aef4ea8-3e5e-497e-b2bd-280d521e895f","Type":"ContainerStarted","Data":"9e4d8dcd20d311e626238e21edd2c44bb878eb4b250e898ba7c5f21c31a94002"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.289179 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.307884 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" podStartSLOduration=5.148025893 podStartE2EDuration="17.307864539s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.484819685 +0000 UTC m=+818.258148727" lastFinishedPulling="2026-01-31 15:12:24.644658321 +0000 UTC m=+830.417987373" observedRunningTime="2026-01-31 15:12:27.288714385 +0000 UTC m=+833.062043427" watchObservedRunningTime="2026-01-31 15:12:27.307864539 +0000 UTC m=+833.081193581" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.317599 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.317909 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:27 crc kubenswrapper[4735]: E0131 15:12:27.318803 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: 
secret "metrics-server-cert" not found Jan 31 15:12:27 crc kubenswrapper[4735]: E0131 15:12:27.318913 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:43.318898012 +0000 UTC m=+849.092227054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "metrics-server-cert" not found Jan 31 15:12:27 crc kubenswrapper[4735]: E0131 15:12:27.319009 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 15:12:27 crc kubenswrapper[4735]: E0131 15:12:27.319082 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs podName:3808aa6d-1386-4e9a-81b2-e37c11246170 nodeName:}" failed. No retries permitted until 2026-01-31 15:12:43.319074997 +0000 UTC m=+849.092404039 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs") pod "openstack-operator-controller-manager-55f4d66b54-gcks8" (UID: "3808aa6d-1386-4e9a-81b2-e37c11246170") : secret "webhook-server-cert" not found Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.327721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" event={"ID":"ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d","Type":"ContainerStarted","Data":"c1f6f2248fc18b342632aa6f188dbd34716d63f1e8a1ce3bc38e9735c7caf1e1"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.328448 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.348354 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" podStartSLOduration=5.038984575 podStartE2EDuration="17.348340989s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.335691598 +0000 UTC m=+818.109020640" lastFinishedPulling="2026-01-31 15:12:24.645047972 +0000 UTC m=+830.418377054" observedRunningTime="2026-01-31 15:12:27.34484731 +0000 UTC m=+833.118176352" watchObservedRunningTime="2026-01-31 15:12:27.348340989 +0000 UTC m=+833.121670031" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.354130 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" event={"ID":"5926394d-8ab0-46d7-9bb6-1ea59a0d7511","Type":"ContainerStarted","Data":"efff7d1eea4f24f046ef984f96c0ce1ac2b0b682b9712adf403f05e0b49901d2"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.354831 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.363134 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" 
event={"ID":"bc56f00a-31c6-474b-af93-59442f956567","Type":"ContainerStarted","Data":"a730a02f439b394be642a40a2fa78ac79239bd2c2be84d6eca81bc1a257386f0"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.363868 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.388788 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" podStartSLOduration=5.224705 podStartE2EDuration="17.388766677s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:11.725110129 +0000 UTC m=+817.498439171" lastFinishedPulling="2026-01-31 15:12:23.889171806 +0000 UTC m=+829.662500848" observedRunningTime="2026-01-31 15:12:27.38427141 +0000 UTC m=+833.157600462" watchObservedRunningTime="2026-01-31 15:12:27.388766677 +0000 UTC m=+833.162095719" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.412984 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" event={"ID":"627cef1f-bb76-4dd2-b7d1-b3f55bdeb335","Type":"ContainerStarted","Data":"2d449a46273ccc0917d3e6be91df1bd4e76d54afafdb97f9b1756f788e8bcda3"} Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.413853 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.460184 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" podStartSLOduration=3.945468084 podStartE2EDuration="17.460169266s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.376861037 +0000 UTC m=+818.150190079" lastFinishedPulling="2026-01-31 15:12:25.891562209 +0000 UTC m=+831.664891261" observedRunningTime="2026-01-31 15:12:27.432059028 +0000 UTC m=+833.205388080" watchObservedRunningTime="2026-01-31 15:12:27.460169266 +0000 UTC m=+833.233498308" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.545027 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" podStartSLOduration=4.006385256 podStartE2EDuration="17.545007917s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.347315378 +0000 UTC m=+818.120644420" lastFinishedPulling="2026-01-31 15:12:25.885938019 +0000 UTC m=+831.659267081" observedRunningTime="2026-01-31 15:12:27.492794193 +0000 UTC m=+833.266123235" watchObservedRunningTime="2026-01-31 15:12:27.545007917 +0000 UTC m=+833.318336959" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.547831 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" podStartSLOduration=3.971311809 podStartE2EDuration="17.547820917s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.314241698 +0000 UTC m=+818.087570740" lastFinishedPulling="2026-01-31 15:12:25.890750766 +0000 UTC m=+831.664079848" observedRunningTime="2026-01-31 15:12:27.545266634 +0000 UTC m=+833.318595676" watchObservedRunningTime="2026-01-31 15:12:27.547820917 +0000 UTC m=+833.321149959" Jan 31 
15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.674879 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" podStartSLOduration=4.284391704 podStartE2EDuration="17.674854226s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.371752092 +0000 UTC m=+818.145081134" lastFinishedPulling="2026-01-31 15:12:25.762214584 +0000 UTC m=+831.535543656" observedRunningTime="2026-01-31 15:12:27.601392529 +0000 UTC m=+833.374721571" watchObservedRunningTime="2026-01-31 15:12:27.674854226 +0000 UTC m=+833.448183268" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.727897 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" podStartSLOduration=4.478979743 podStartE2EDuration="17.727883913s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.51141558 +0000 UTC m=+818.284744622" lastFinishedPulling="2026-01-31 15:12:25.76031974 +0000 UTC m=+831.533648792" observedRunningTime="2026-01-31 15:12:27.644915225 +0000 UTC m=+833.418244267" watchObservedRunningTime="2026-01-31 15:12:27.727883913 +0000 UTC m=+833.501212945" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.748179 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" podStartSLOduration=4.632294049 podStartE2EDuration="17.748156009s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.37800744 +0000 UTC m=+818.151336482" lastFinishedPulling="2026-01-31 15:12:25.49386941 +0000 UTC m=+831.267198442" observedRunningTime="2026-01-31 15:12:27.725917927 +0000 UTC m=+833.499246959" watchObservedRunningTime="2026-01-31 15:12:27.748156009 +0000 UTC m=+833.521485051" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.800640 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" podStartSLOduration=4.30256814 podStartE2EDuration="17.800618139s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:11.887260016 +0000 UTC m=+817.660589058" lastFinishedPulling="2026-01-31 15:12:25.385310015 +0000 UTC m=+831.158639057" observedRunningTime="2026-01-31 15:12:27.7939409 +0000 UTC m=+833.567269942" watchObservedRunningTime="2026-01-31 15:12:27.800618139 +0000 UTC m=+833.573947181" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.961712 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" podStartSLOduration=4.93678323 podStartE2EDuration="17.961692546s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.507923291 +0000 UTC m=+818.281252333" lastFinishedPulling="2026-01-31 15:12:25.532832607 +0000 UTC m=+831.306161649" observedRunningTime="2026-01-31 15:12:27.822309476 +0000 UTC m=+833.595638528" watchObservedRunningTime="2026-01-31 15:12:27.961692546 +0000 UTC m=+833.735021588" Jan 31 15:12:27 crc kubenswrapper[4735]: I0131 15:12:27.962963 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-66z2p"] Jan 31 15:12:28 crc kubenswrapper[4735]: I0131 15:12:28.157814 4735 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk"] Jan 31 15:12:28 crc kubenswrapper[4735]: W0131 15:12:28.166173 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod437ef1c6_09b5_45c2_b88d_e42e432ae801.slice/crio-addef884d24b22edfee09fb591846bf56cb689cc61e0c6541715cd9cff6aff74 WatchSource:0}: Error finding container addef884d24b22edfee09fb591846bf56cb689cc61e0c6541715cd9cff6aff74: Status 404 returned error can't find the container with id addef884d24b22edfee09fb591846bf56cb689cc61e0c6541715cd9cff6aff74 Jan 31 15:12:28 crc kubenswrapper[4735]: I0131 15:12:28.421943 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" event={"ID":"8d42c163-9e7d-485f-b94e-4796166ba8f9","Type":"ContainerStarted","Data":"e43e1395a62d292c0b2d8603c77f560ad636b4560a4af9d062b14c9514e3b516"} Jan 31 15:12:28 crc kubenswrapper[4735]: I0131 15:12:28.424357 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" event={"ID":"437ef1c6-09b5-45c2-b88d-e42e432ae801","Type":"ContainerStarted","Data":"addef884d24b22edfee09fb591846bf56cb689cc61e0c6541715cd9cff6aff74"} Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.037257 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-bxf2k" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.091919 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64d858bbbd-k4bh2" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.187453 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-r7xlm" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.239287 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-qh6xs" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.241773 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-bb6l7" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.379786 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-hzrws" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.426760 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-rfrcn" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.462038 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7v4mn" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.477926 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-rtzcv" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.510195 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7d96d95959-jxthf" Jan 31 15:12:31 crc 
kubenswrapper[4735]: I0131 15:12:31.635570 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-zgmmx" Jan 31 15:12:31 crc kubenswrapper[4735]: I0131 15:12:31.681759 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-76864d4fdb-ps2jp" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.519989 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" event={"ID":"0c4b1bae-6cff-4914-907c-f6c9867a803b","Type":"ContainerStarted","Data":"ae5e93b9df5fed737be5cccefaf5b638ccb4d46fc937a2aff4585014203e5cbe"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.520666 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.525553 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" event={"ID":"9ba40dc8-290a-4a40-a039-609874c181d4","Type":"ContainerStarted","Data":"365866c73b4e68e450ffb1c9ec32fb023501deb7b2cbc8ab5544767e0d39108d"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.527695 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" event={"ID":"b469fe09-816f-4ffa-a61d-82e448011837","Type":"ContainerStarted","Data":"5d883015b70dd05b03624b8912fedec9723b31cb74b278974b03f55140ac8c5a"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.528054 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.529440 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" event={"ID":"80924e89-7cef-4879-b955-28d3ef271729","Type":"ContainerStarted","Data":"a8dab962373ea35c0cb7c24cd5eea23f9255be7ee9c935cd6b320a513265e474"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.529667 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.531132 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" event={"ID":"f2610081-f50c-441f-8b8a-bc2a236065f1","Type":"ContainerStarted","Data":"b14888f671ad158205530c00997d486ae800ce5398159a272ba63fd8bc658b77"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.531412 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.532775 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" event={"ID":"437ef1c6-09b5-45c2-b88d-e42e432ae801","Type":"ContainerStarted","Data":"dc1b60e0db8398c821e75562cd789ee16eb8264393c565d1fcd0eff6b0bf8d80"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.532897 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.534892 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" event={"ID":"8bc95764-b0cb-4206-af35-fefb00d8c71f","Type":"ContainerStarted","Data":"f08667f295b16d5e002021c4d389c61ba464bf8f4710f3c278c486c2a1f58974"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.535163 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.536575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" event={"ID":"57aa7be3-f130-41b7-a400-1c2ddd1b8ce3","Type":"ContainerStarted","Data":"0c045a0da9ec173a4c94399dec0dbf2eae586d27e15f28d5b26c75c19653cf55"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.536809 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.538160 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" event={"ID":"8d42c163-9e7d-485f-b94e-4796166ba8f9","Type":"ContainerStarted","Data":"629b53b1fbd3b33315085c94288d064553261e8fa24905c566568c007e7a19ab"} Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.538283 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.544279 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" podStartSLOduration=2.658989106 podStartE2EDuration="29.544264882s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.634904789 +0000 UTC m=+818.408233831" lastFinishedPulling="2026-01-31 15:12:39.520180565 +0000 UTC m=+845.293509607" observedRunningTime="2026-01-31 15:12:40.539692611 +0000 UTC m=+846.313021673" watchObservedRunningTime="2026-01-31 15:12:40.544264882 +0000 UTC m=+846.317593924" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.556162 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" podStartSLOduration=4.012500506 podStartE2EDuration="30.556144111s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.51457327 +0000 UTC m=+818.287902312" lastFinishedPulling="2026-01-31 15:12:39.058216865 +0000 UTC m=+844.831545917" observedRunningTime="2026-01-31 15:12:40.552759214 +0000 UTC m=+846.326088266" watchObservedRunningTime="2026-01-31 15:12:40.556144111 +0000 UTC m=+846.329473153" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.567007 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" podStartSLOduration=19.039021537 podStartE2EDuration="30.56698309s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:28.005520041 +0000 UTC m=+833.778849083" lastFinishedPulling="2026-01-31 15:12:39.533481594 +0000 UTC m=+845.306810636" 
observedRunningTime="2026-01-31 15:12:40.563261064 +0000 UTC m=+846.336590126" watchObservedRunningTime="2026-01-31 15:12:40.56698309 +0000 UTC m=+846.340312142" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.582309 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" podStartSLOduration=2.6584437 podStartE2EDuration="29.582285407s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.638961174 +0000 UTC m=+818.412290216" lastFinishedPulling="2026-01-31 15:12:39.562802861 +0000 UTC m=+845.336131923" observedRunningTime="2026-01-31 15:12:40.576868032 +0000 UTC m=+846.350197104" watchObservedRunningTime="2026-01-31 15:12:40.582285407 +0000 UTC m=+846.355614449" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.606610 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lmv8c" podStartSLOduration=2.707459885 podStartE2EDuration="29.60658837s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.662924035 +0000 UTC m=+818.436253087" lastFinishedPulling="2026-01-31 15:12:39.56205253 +0000 UTC m=+845.335381572" observedRunningTime="2026-01-31 15:12:40.604808619 +0000 UTC m=+846.378137671" watchObservedRunningTime="2026-01-31 15:12:40.60658837 +0000 UTC m=+846.379917412" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.627978 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" podStartSLOduration=3.642807505 podStartE2EDuration="30.62795929s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.548402131 +0000 UTC m=+818.321731173" lastFinishedPulling="2026-01-31 15:12:39.533553916 +0000 UTC m=+845.306882958" observedRunningTime="2026-01-31 15:12:40.624640885 +0000 UTC m=+846.397969927" watchObservedRunningTime="2026-01-31 15:12:40.62795929 +0000 UTC m=+846.401288332" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.649530 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" podStartSLOduration=2.750359959 podStartE2EDuration="29.649512554s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.661213486 +0000 UTC m=+818.434542538" lastFinishedPulling="2026-01-31 15:12:39.560366091 +0000 UTC m=+845.333695133" observedRunningTime="2026-01-31 15:12:40.644703477 +0000 UTC m=+846.418032519" watchObservedRunningTime="2026-01-31 15:12:40.649512554 +0000 UTC m=+846.422841596" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.669852 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" podStartSLOduration=19.274196666 podStartE2EDuration="30.669834974s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:28.168384218 +0000 UTC m=+833.941713250" lastFinishedPulling="2026-01-31 15:12:39.564022516 +0000 UTC m=+845.337351558" observedRunningTime="2026-01-31 15:12:40.66653759 +0000 UTC m=+846.439866632" watchObservedRunningTime="2026-01-31 15:12:40.669834974 +0000 UTC m=+846.443164016" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.690239 4735 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" podStartSLOduration=3.632201124 podStartE2EDuration="30.690213116s" podCreationTimestamp="2026-01-31 15:12:10 +0000 UTC" firstStartedPulling="2026-01-31 15:12:12.524268465 +0000 UTC m=+818.297597507" lastFinishedPulling="2026-01-31 15:12:39.582280467 +0000 UTC m=+845.355609499" observedRunningTime="2026-01-31 15:12:40.681826956 +0000 UTC m=+846.455155998" watchObservedRunningTime="2026-01-31 15:12:40.690213116 +0000 UTC m=+846.463542158" Jan 31 15:12:40 crc kubenswrapper[4735]: I0131 15:12:40.973248 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-787499fbb-drsgx" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.363759 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.364140 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.372814 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-webhook-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.372905 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3808aa6d-1386-4e9a-81b2-e37c11246170-metrics-certs\") pod \"openstack-operator-controller-manager-55f4d66b54-gcks8\" (UID: \"3808aa6d-1386-4e9a-81b2-e37c11246170\") " pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.657642 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ngr7p" Jan 31 15:12:43 crc kubenswrapper[4735]: I0131 15:12:43.665188 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:44 crc kubenswrapper[4735]: I0131 15:12:44.143920 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8"] Jan 31 15:12:44 crc kubenswrapper[4735]: W0131 15:12:44.149655 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3808aa6d_1386_4e9a_81b2_e37c11246170.slice/crio-d2b438f5a632ab00053d304bdbeaf39e75892e493a6bbe271eb52c49122d0b1e WatchSource:0}: Error finding container d2b438f5a632ab00053d304bdbeaf39e75892e493a6bbe271eb52c49122d0b1e: Status 404 returned error can't find the container with id d2b438f5a632ab00053d304bdbeaf39e75892e493a6bbe271eb52c49122d0b1e Jan 31 15:12:44 crc kubenswrapper[4735]: I0131 15:12:44.570755 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" event={"ID":"3808aa6d-1386-4e9a-81b2-e37c11246170","Type":"ContainerStarted","Data":"0bea56b758b654e01f92ef39b96fc3927fc452378fef4f4ea8aafae3cf99b7ea"} Jan 31 15:12:44 crc kubenswrapper[4735]: I0131 15:12:44.571399 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:12:44 crc kubenswrapper[4735]: I0131 15:12:44.571443 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" event={"ID":"3808aa6d-1386-4e9a-81b2-e37c11246170","Type":"ContainerStarted","Data":"d2b438f5a632ab00053d304bdbeaf39e75892e493a6bbe271eb52c49122d0b1e"} Jan 31 15:12:44 crc kubenswrapper[4735]: I0131 15:12:44.613654 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" podStartSLOduration=33.613629748 podStartE2EDuration="33.613629748s" podCreationTimestamp="2026-01-31 15:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:12:44.61124651 +0000 UTC m=+850.384575602" watchObservedRunningTime="2026-01-31 15:12:44.613629748 +0000 UTC m=+850.386958800" Jan 31 15:12:46 crc kubenswrapper[4735]: I0131 15:12:46.968865 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-66z2p" Jan 31 15:12:47 crc kubenswrapper[4735]: I0131 15:12:47.187520 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.552299 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-f8h8s" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.553510 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-f8f64" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.605560 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-thcqg" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.747811 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-8446785844-jtbmg" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.775997 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dlpt5" Jan 31 15:12:51 crc kubenswrapper[4735]: I0131 15:12:51.788336 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-pqmvf" Jan 31 15:12:53 crc kubenswrapper[4735]: I0131 15:12:53.672708 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-55f4d66b54-gcks8" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.425851 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.452508 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.468702 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.470560 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.470727 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.470839 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltcv\" (UniqueName: \"kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.571837 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.571978 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltcv\" (UniqueName: \"kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.572023 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.572622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.572766 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.607041 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nltcv\" (UniqueName: \"kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv\") pod \"community-operators-p9bcg\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:06 crc kubenswrapper[4735]: I0131 15:13:06.799569 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:07 crc kubenswrapper[4735]: I0131 15:13:07.285145 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:07 crc kubenswrapper[4735]: I0131 15:13:07.774017 4735 generic.go:334] "Generic (PLEG): container finished" podID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerID="3a747406b4760ce00ea9b9e9e17461f0a9e72c906aaeb7135c410b5b5e640f56" exitCode=0 Jan 31 15:13:07 crc kubenswrapper[4735]: I0131 15:13:07.774061 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerDied","Data":"3a747406b4760ce00ea9b9e9e17461f0a9e72c906aaeb7135c410b5b5e640f56"} Jan 31 15:13:07 crc kubenswrapper[4735]: I0131 15:13:07.774285 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerStarted","Data":"9607c11a16f8bc778bd66a2583842d706fe849980ee26e7f5fc25114863b74a3"} Jan 31 15:13:07 crc kubenswrapper[4735]: I0131 15:13:07.776072 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:13:08 crc kubenswrapper[4735]: I0131 15:13:08.783161 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerStarted","Data":"aab76e1f019ff58ffd8d10826b431c0560c2e04856fd09f4281c298aeaaa66d9"} Jan 31 15:13:09 crc kubenswrapper[4735]: I0131 15:13:09.795011 4735 generic.go:334] "Generic (PLEG): container finished" podID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerID="aab76e1f019ff58ffd8d10826b431c0560c2e04856fd09f4281c298aeaaa66d9" exitCode=0 Jan 31 15:13:09 crc kubenswrapper[4735]: I0131 15:13:09.795081 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerDied","Data":"aab76e1f019ff58ffd8d10826b431c0560c2e04856fd09f4281c298aeaaa66d9"} Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.396828 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.398892 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.409371 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.443757 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmpv\" (UniqueName: \"kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.443841 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.443895 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.544769 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.544851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmpv\" (UniqueName: \"kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.544884 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.545457 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" 
Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.545529 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.567422 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmpv\" (UniqueName: \"kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv\") pod \"certified-operators-qxj9w\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:10 crc kubenswrapper[4735]: I0131 15:13:10.755557 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.704018 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.705559 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.711563 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.712590 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.712784 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.713876 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-m88f4" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.726822 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.752548 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.753725 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.756184 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.762757 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.762864 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8hcm\" (UniqueName: \"kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.763138 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5qgr\" (UniqueName: \"kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.763297 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.763348 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.770204 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.864763 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8hcm\" (UniqueName: \"kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.864838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5qgr\" (UniqueName: \"kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.864891 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.864917 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.864950 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.865840 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.866010 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.866125 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.885096 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5qgr\" (UniqueName: \"kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr\") pod \"dnsmasq-dns-675f4bcbfc-jp8d7\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:11.885515 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8hcm\" (UniqueName: \"kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm\") pod \"dnsmasq-dns-78dd6ddcc-2dzxs\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:12.022123 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:12.073990 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.693722 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.741313 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.742660 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.766357 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.808336 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.808413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfdpf\" (UniqueName: \"kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.808511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.910148 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.910220 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfdpf\" (UniqueName: \"kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.910281 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.911077 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.911103 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.930453 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfdpf\" (UniqueName: 
\"kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf\") pod \"dnsmasq-dns-666b6646f7-hd9g8\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.992715 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:14.994051 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.011279 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.011531 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spd48\" (UniqueName: \"kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.011594 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.011618 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.015689 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.051181 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.052706 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.068094 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.080012 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113115 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xms6\" (UniqueName: \"kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113195 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spd48\" (UniqueName: \"kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113279 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113302 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113321 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113762 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.113870 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.136161 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spd48\" (UniqueName: \"kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48\") pod \"redhat-operators-ggrqd\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " 
pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.214896 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.214954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.215001 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xms6\" (UniqueName: \"kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.215975 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.216225 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.232723 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xms6\" (UniqueName: \"kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6\") pod \"dnsmasq-dns-57d769cc4f-bmmnk\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.308591 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.365759 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.840620 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.877083 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerStarted","Data":"60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc"} Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.893795 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:15 crc kubenswrapper[4735]: W0131 15:13:15.895791 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34a19c2d_fb87_483b_b56c_948f1ba8f2a0.slice/crio-40a4097700c87e745febc4c4eef20bb98022e8f88780fbcc003bf3863f576deb WatchSource:0}: Error finding container 40a4097700c87e745febc4c4eef20bb98022e8f88780fbcc003bf3863f576deb: Status 404 returned error can't find the container with id 40a4097700c87e745febc4c4eef20bb98022e8f88780fbcc003bf3863f576deb Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.901124 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9bcg" podStartSLOduration=2.339916445 podStartE2EDuration="9.901108641s" podCreationTimestamp="2026-01-31 15:13:06 +0000 UTC" firstStartedPulling="2026-01-31 15:13:07.775630567 +0000 UTC m=+873.548959649" lastFinishedPulling="2026-01-31 15:13:15.336822803 +0000 UTC m=+881.110151845" observedRunningTime="2026-01-31 15:13:15.897405476 +0000 UTC m=+881.670734538" watchObservedRunningTime="2026-01-31 15:13:15.901108641 +0000 UTC m=+881.674437683" Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.919477 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.930318 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:15 crc kubenswrapper[4735]: W0131 15:13:15.935502 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a481f1_ff58_488e_93f1_e8792bc8feaf.slice/crio-0395fe1b88383e35e29a3d4b0026f304104e367d28bdfb3bab5e646f7190bc7d WatchSource:0}: Error finding container 0395fe1b88383e35e29a3d4b0026f304104e367d28bdfb3bab5e646f7190bc7d: Status 404 returned error can't find the container with id 0395fe1b88383e35e29a3d4b0026f304104e367d28bdfb3bab5e646f7190bc7d Jan 31 15:13:15 crc kubenswrapper[4735]: W0131 15:13:15.935967 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3fda1c7_1f92_49b1_b3cd_fb5848208e67.slice/crio-5dc5cf07ed1caaf507969287e3bf6cc6d6b2adf028ad3663a296d1523a757a40 WatchSource:0}: Error finding container 5dc5cf07ed1caaf507969287e3bf6cc6d6b2adf028ad3663a296d1523a757a40: Status 404 returned error can't find the container with id 5dc5cf07ed1caaf507969287e3bf6cc6d6b2adf028ad3663a296d1523a757a40 Jan 31 15:13:15 crc kubenswrapper[4735]: I0131 15:13:15.973485 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.017814 4735 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.021466 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.023385 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.031195 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-75jv5" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.049403 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.050467 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.051859 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.052221 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.052510 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.054551 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.057658 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140232 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140286 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140308 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140328 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140365 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140450 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140485 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140536 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140569 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkkh\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.140591 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.157409 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.163229 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.166628 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.166857 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.166876 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wm87z" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.166992 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.167299 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.167564 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.169763 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.171634 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241283 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241469 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241571 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241666 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdkkh\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241731 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241797 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-px668\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241864 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.241930 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242003 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242077 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242142 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242281 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242350 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242587 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242667 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242837 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242906 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.242971 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.243045 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.243389 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.244820 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 
15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.245685 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.245783 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.247345 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.246103 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.249274 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.253694 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.254497 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.265275 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdkkh\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.271928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.292730 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " 
pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344516 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344559 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px668\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344580 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344600 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344679 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.344756 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.345039 4735 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.345239 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.345271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.345758 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.346101 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.347617 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.348730 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.349695 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.351207 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.354030 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.354106 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.366864 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.369886 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px668\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.386585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.425731 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.579889 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.800124 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.800665 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.898520 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" event={"ID":"d3fda1c7-1f92-49b1-b3cd-fb5848208e67","Type":"ContainerStarted","Data":"5dc5cf07ed1caaf507969287e3bf6cc6d6b2adf028ad3663a296d1523a757a40"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.919645 4735 generic.go:334] "Generic (PLEG): container finished" podID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerID="bff10f910b7ceb5176578efa530bdadbdbbb0f32fd2d66540198fa6e556918b1" exitCode=0 Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.919730 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerDied","Data":"bff10f910b7ceb5176578efa530bdadbdbbb0f32fd2d66540198fa6e556918b1"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.920448 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerStarted","Data":"0395fe1b88383e35e29a3d4b0026f304104e367d28bdfb3bab5e646f7190bc7d"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.928772 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.931069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" event={"ID":"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9","Type":"ContainerStarted","Data":"7b9ef1d827cbccbc0be68c630334eb9d0c9b76565475b2bf5a9649e96445694f"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.944885 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerID="e5914767c2f7eee2a2beb8c2c9c12f1e5e40e44a67ee1de387463a447ef5d588" exitCode=0 Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.944970 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerDied","Data":"e5914767c2f7eee2a2beb8c2c9c12f1e5e40e44a67ee1de387463a447ef5d588"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.945014 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerStarted","Data":"f0407f6ac34d49bedb62a3e1799de7d742fffd79c772803b567911aa79aa8e80"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.951546 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" event={"ID":"e93d7768-7f1e-467c-97df-29645555a92a","Type":"ContainerStarted","Data":"d29dbad05ac3dd77b6d0d9bb05639a19794be197a007034fa4f92a215df54b78"} Jan 31 15:13:16 crc kubenswrapper[4735]: I0131 15:13:16.953648 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" 
event={"ID":"34a19c2d-fb87-483b-b56c-948f1ba8f2a0","Type":"ContainerStarted","Data":"40a4097700c87e745febc4c4eef20bb98022e8f88780fbcc003bf3863f576deb"} Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.114367 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:13:17 crc kubenswrapper[4735]: W0131 15:13:17.127959 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aad2308_9cbb_48a2_99cc_7556caf884a5.slice/crio-04a9a77dedbb83250b4ac68dcb5b315145fd54ee59f6df6052839fa46abadeaa WatchSource:0}: Error finding container 04a9a77dedbb83250b4ac68dcb5b315145fd54ee59f6df6052839fa46abadeaa: Status 404 returned error can't find the container with id 04a9a77dedbb83250b4ac68dcb5b315145fd54ee59f6df6052839fa46abadeaa Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.398937 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.400053 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.447735 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.452219 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-krvtk" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.452712 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.453806 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.453952 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466267 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466346 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " 
pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466448 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2dn\" (UniqueName: \"kubernetes.io/projected/8f2fd0fe-2906-4934-b08b-27032a482331-kube-api-access-tx2dn\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466474 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.466521 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.492734 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.570998 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.571072 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.571209 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572054 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-default\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572110 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572181 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572237 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2dn\" (UniqueName: \"kubernetes.io/projected/8f2fd0fe-2906-4934-b08b-27032a482331-kube-api-access-tx2dn\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572671 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572863 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f2fd0fe-2906-4934-b08b-27032a482331-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.572268 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-kolla-config\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.573676 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.573876 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f2fd0fe-2906-4934-b08b-27032a482331-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.602303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.602664 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2fd0fe-2906-4934-b08b-27032a482331-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.607292 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2dn\" (UniqueName: \"kubernetes.io/projected/8f2fd0fe-2906-4934-b08b-27032a482331-kube-api-access-tx2dn\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.639949 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"8f2fd0fe-2906-4934-b08b-27032a482331\") " pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.772681 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.860811 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9bcg" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" probeResult="failure" output=< Jan 31 15:13:17 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:13:17 crc kubenswrapper[4735]: > Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.991731 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerStarted","Data":"04a9a77dedbb83250b4ac68dcb5b315145fd54ee59f6df6052839fa46abadeaa"} Jan 31 15:13:17 crc kubenswrapper[4735]: I0131 15:13:17.994332 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerStarted","Data":"e84e4feff54b1ce4dbb3a5053e76ca683dc5808bcf2caa8fc8c0ea57f2f500b3"} Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.098048 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 15:13:18 crc kubenswrapper[4735]: W0131 15:13:18.116238 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2fd0fe_2906_4934_b08b_27032a482331.slice/crio-1a1aca241bcab17884ac3b3c7e03fdc9a0df57b46b61aeac186d5293f8810cbd WatchSource:0}: Error finding container 1a1aca241bcab17884ac3b3c7e03fdc9a0df57b46b61aeac186d5293f8810cbd: Status 404 returned error can't find the container with id 1a1aca241bcab17884ac3b3c7e03fdc9a0df57b46b61aeac186d5293f8810cbd Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.705585 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.710067 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.713405 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.714063 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.714252 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.714431 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-56q7d" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.714993 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.794463 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.794776 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.794832 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.794862 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.794967 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.795098 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjnv\" (UniqueName: \"kubernetes.io/projected/5616e745-0304-4987-bc98-aaa42fc5f6ea-kube-api-access-gkjnv\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.795151 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.795184 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896118 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896160 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896179 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896253 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896324 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjnv\" (UniqueName: \"kubernetes.io/projected/5616e745-0304-4987-bc98-aaa42fc5f6ea-kube-api-access-gkjnv\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896351 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896371 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.896414 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.897101 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.898826 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.899081 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.900353 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616e745-0304-4987-bc98-aaa42fc5f6ea-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.912044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5616e745-0304-4987-bc98-aaa42fc5f6ea-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.917442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.921791 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjnv\" (UniqueName: \"kubernetes.io/projected/5616e745-0304-4987-bc98-aaa42fc5f6ea-kube-api-access-gkjnv\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.925574 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5616e745-0304-4987-bc98-aaa42fc5f6ea-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:18 crc kubenswrapper[4735]: I0131 15:13:18.930631 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"5616e745-0304-4987-bc98-aaa42fc5f6ea\") " pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.019307 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f2fd0fe-2906-4934-b08b-27032a482331","Type":"ContainerStarted","Data":"1a1aca241bcab17884ac3b3c7e03fdc9a0df57b46b61aeac186d5293f8810cbd"} Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.039807 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.043298 4735 generic.go:334] "Generic (PLEG): container finished" podID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerID="14d195ea6a0c1c4e87d4034846b34f1938f682a8a7a38f221a4713446cba0e07" exitCode=0 Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.043362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerDied","Data":"14d195ea6a0c1c4e87d4034846b34f1938f682a8a7a38f221a4713446cba0e07"} Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.065129 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerStarted","Data":"c9cd616bc406818f65ff998f70bddd8bc2a7ddb69233914487f3fa5676fa96f9"} Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.185207 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.186969 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.193580 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.195078 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.195233 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tl6kw" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.202631 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.305572 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-config-data\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.305902 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqt4b\" (UniqueName: \"kubernetes.io/projected/bb53ef9b-e389-4e78-a677-5def022eab7e-kube-api-access-gqt4b\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.305939 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " 
pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.305976 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.306020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-kolla-config\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.407173 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.407224 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.407264 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-kolla-config\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.407330 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-config-data\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.407353 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqt4b\" (UniqueName: \"kubernetes.io/projected/bb53ef9b-e389-4e78-a677-5def022eab7e-kube-api-access-gqt4b\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.408289 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-config-data\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.408743 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bb53ef9b-e389-4e78-a677-5def022eab7e-kolla-config\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.418676 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.420163 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb53ef9b-e389-4e78-a677-5def022eab7e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.443150 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqt4b\" (UniqueName: \"kubernetes.io/projected/bb53ef9b-e389-4e78-a677-5def022eab7e-kube-api-access-gqt4b\") pod \"memcached-0\" (UID: \"bb53ef9b-e389-4e78-a677-5def022eab7e\") " pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.537286 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 15:13:19 crc kubenswrapper[4735]: I0131 15:13:19.674380 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.009371 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.082742 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerStarted","Data":"ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6"} Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.101988 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerID="c9cd616bc406818f65ff998f70bddd8bc2a7ddb69233914487f3fa5676fa96f9" exitCode=0 Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.102074 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerDied","Data":"c9cd616bc406818f65ff998f70bddd8bc2a7ddb69233914487f3fa5676fa96f9"} Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.104127 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bb53ef9b-e389-4e78-a677-5def022eab7e","Type":"ContainerStarted","Data":"308cf8a66d7ee0b4ee0083e3481e619b642a06af2248174835907061fd455be7"} Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.107190 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5616e745-0304-4987-bc98-aaa42fc5f6ea","Type":"ContainerStarted","Data":"ec3e3100b84413e2b768e7d58e4032ca950aa56b7ba1affef2d16f0fd762093d"} Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.110089 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxj9w" podStartSLOduration=7.444979317 podStartE2EDuration="10.11007328s" podCreationTimestamp="2026-01-31 15:13:10 +0000 UTC" firstStartedPulling="2026-01-31 15:13:16.92349308 +0000 UTC m=+882.696822122" lastFinishedPulling="2026-01-31 15:13:19.588587043 +0000 UTC m=+885.361916085" observedRunningTime="2026-01-31 15:13:20.10163145 +0000 UTC m=+885.874960502" watchObservedRunningTime="2026-01-31 15:13:20.11007328 +0000 UTC m=+885.883402322" Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.756183 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:20 crc kubenswrapper[4735]: I0131 15:13:20.756644 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.122316 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.123189 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.130968 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tk9vv" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.136267 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.156237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerStarted","Data":"8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6"} Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.194157 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggrqd" podStartSLOduration=4.404697026 podStartE2EDuration="7.194141377s" podCreationTimestamp="2026-01-31 15:13:14 +0000 UTC" firstStartedPulling="2026-01-31 15:13:16.976764919 +0000 UTC m=+882.750093961" lastFinishedPulling="2026-01-31 15:13:19.76620927 +0000 UTC m=+885.539538312" observedRunningTime="2026-01-31 15:13:21.191400269 +0000 UTC m=+886.964729311" watchObservedRunningTime="2026-01-31 15:13:21.194141377 +0000 UTC m=+886.967470409" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.252648 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7dm8\" (UniqueName: \"kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8\") pod \"kube-state-metrics-0\" (UID: \"bf00fd0b-9de0-4726-ae79-94596a39fffe\") " pod="openstack/kube-state-metrics-0" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.354198 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7dm8\" (UniqueName: \"kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8\") pod \"kube-state-metrics-0\" (UID: \"bf00fd0b-9de0-4726-ae79-94596a39fffe\") " pod="openstack/kube-state-metrics-0" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.371964 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7dm8\" (UniqueName: \"kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8\") pod \"kube-state-metrics-0\" (UID: \"bf00fd0b-9de0-4726-ae79-94596a39fffe\") " pod="openstack/kube-state-metrics-0" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.455551 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:13:21 crc kubenswrapper[4735]: I0131 15:13:21.824264 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qxj9w" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" probeResult="failure" output=< Jan 31 15:13:21 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:13:21 crc kubenswrapper[4735]: > Jan 31 15:13:22 crc kubenswrapper[4735]: I0131 15:13:22.138534 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:13:22 crc kubenswrapper[4735]: I0131 15:13:22.171849 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf00fd0b-9de0-4726-ae79-94596a39fffe","Type":"ContainerStarted","Data":"03b9ed05a18a92f98bff483a455fc2610b2b04e0fdbc1867b759cb88b2fa245c"} Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.133819 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2vhbk"] Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.135000 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.138215 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ffkst" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.138395 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.144579 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.145092 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-v8dt8"] Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.150695 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.168124 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2vhbk"] Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.181770 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v8dt8"] Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211158 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-etc-ovs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211215 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-run\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0eebe3-8b72-4599-b6f6-ba54f3836563-scripts\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211294 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-lib\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211311 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8fs\" (UniqueName: \"kubernetes.io/projected/bc0eebe3-8b72-4599-b6f6-ba54f3836563-kube-api-access-tx8fs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211343 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-log\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211361 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-log-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211378 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-combined-ca-bundle\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: 
I0131 15:13:24.211397 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211412 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211480 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07524504-28f6-44cc-8630-2e736f87ff3d-scripts\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211498 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-ovn-controller-tls-certs\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.211517 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2tj\" (UniqueName: \"kubernetes.io/projected/07524504-28f6-44cc-8630-2e736f87ff3d-kube-api-access-gs2tj\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313162 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-etc-ovs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313221 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-run\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313247 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0eebe3-8b72-4599-b6f6-ba54f3836563-scripts\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313305 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-lib\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313335 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8fs\" 
(UniqueName: \"kubernetes.io/projected/bc0eebe3-8b72-4599-b6f6-ba54f3836563-kube-api-access-tx8fs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-log\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313401 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-log-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313445 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-combined-ca-bundle\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313471 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313493 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313531 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07524504-28f6-44cc-8630-2e736f87ff3d-scripts\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313550 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-ovn-controller-tls-certs\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.313568 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2tj\" (UniqueName: \"kubernetes.io/projected/07524504-28f6-44cc-8630-2e736f87ff3d-kube-api-access-gs2tj\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.314741 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-etc-ovs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " 
pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.314920 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-run\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.316945 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc0eebe3-8b72-4599-b6f6-ba54f3836563-scripts\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.317156 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-lib\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.317487 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc0eebe3-8b72-4599-b6f6-ba54f3836563-var-log\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.317597 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-log-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.318332 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.320102 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07524504-28f6-44cc-8630-2e736f87ff3d-scripts\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.323064 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07524504-28f6-44cc-8630-2e736f87ff3d-var-run-ovn\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.324680 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-ovn-controller-tls-certs\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.340053 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2tj\" (UniqueName: \"kubernetes.io/projected/07524504-28f6-44cc-8630-2e736f87ff3d-kube-api-access-gs2tj\") pod 
\"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.340158 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07524504-28f6-44cc-8630-2e736f87ff3d-combined-ca-bundle\") pod \"ovn-controller-2vhbk\" (UID: \"07524504-28f6-44cc-8630-2e736f87ff3d\") " pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.352952 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8fs\" (UniqueName: \"kubernetes.io/projected/bc0eebe3-8b72-4599-b6f6-ba54f3836563-kube-api-access-tx8fs\") pod \"ovn-controller-ovs-v8dt8\" (UID: \"bc0eebe3-8b72-4599-b6f6-ba54f3836563\") " pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.470547 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:24 crc kubenswrapper[4735]: I0131 15:13:24.481063 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.016701 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.019329 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.027633 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.028011 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.028239 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.028317 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.028250 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-npc8b" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.043558 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.130622 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131049 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131088 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131120 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmwf\" (UniqueName: \"kubernetes.io/projected/667db586-48c3-4b33-8e39-eb27c45d7841-kube-api-access-5jmwf\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-config\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131250 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131300 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.131394 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233224 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233316 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233368 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233523 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233559 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmwf\" (UniqueName: \"kubernetes.io/projected/667db586-48c3-4b33-8e39-eb27c45d7841-kube-api-access-5jmwf\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233628 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-config\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.233698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.234541 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.235033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.235147 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.236526 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/667db586-48c3-4b33-8e39-eb27c45d7841-config\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.238196 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.239020 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.240242 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/667db586-48c3-4b33-8e39-eb27c45d7841-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.249635 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmwf\" (UniqueName: \"kubernetes.io/projected/667db586-48c3-4b33-8e39-eb27c45d7841-kube-api-access-5jmwf\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.264525 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"667db586-48c3-4b33-8e39-eb27c45d7841\") " pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.309470 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.309515 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:25 crc kubenswrapper[4735]: I0131 15:13:25.374762 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 15:13:26 crc kubenswrapper[4735]: I0131 15:13:26.367081 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ggrqd" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" probeResult="failure" output=< Jan 31 15:13:26 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:13:26 crc kubenswrapper[4735]: > Jan 31 15:13:26 crc kubenswrapper[4735]: I0131 15:13:26.873375 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:26 crc kubenswrapper[4735]: I0131 15:13:26.951628 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:27 crc kubenswrapper[4735]: I0131 15:13:27.114491 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.223542 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9bcg" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" containerID="cri-o://60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" gracePeriod=2 Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.405129 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.407993 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.411668 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.412309 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p26pl" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.412743 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.412920 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.416220 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.497517 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.497645 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.497723 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.497907 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.497990 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.498036 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.498078 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " 
pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.498173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrwd\" (UniqueName: \"kubernetes.io/projected/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-kube-api-access-dzrwd\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.599986 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600096 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600131 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600222 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrwd\" (UniqueName: \"kubernetes.io/projected/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-kube-api-access-dzrwd\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600284 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.600793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.601124 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.601622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.602355 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.605912 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.605961 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.609400 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.623250 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrwd\" (UniqueName: \"kubernetes.io/projected/7ff11e61-5fe3-474b-ac0d-8a89a364de0e-kube-api-access-dzrwd\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.642389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7ff11e61-5fe3-474b-ac0d-8a89a364de0e\") " pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:28 crc kubenswrapper[4735]: I0131 15:13:28.727604 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:29 crc kubenswrapper[4735]: I0131 15:13:29.232397 4735 generic.go:334] "Generic (PLEG): container finished" podID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerID="60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" exitCode=0 Jan 31 15:13:29 crc kubenswrapper[4735]: I0131 15:13:29.232480 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerDied","Data":"60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc"} Jan 31 15:13:30 crc kubenswrapper[4735]: I0131 15:13:30.826864 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:30 crc kubenswrapper[4735]: I0131 15:13:30.912587 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:31 crc kubenswrapper[4735]: I0131 15:13:31.085104 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:32 crc kubenswrapper[4735]: I0131 15:13:32.258904 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxj9w" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" containerID="cri-o://ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" gracePeriod=2 Jan 31 15:13:35 crc kubenswrapper[4735]: I0131 15:13:35.352664 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:35 crc kubenswrapper[4735]: I0131 15:13:35.399602 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:35 crc kubenswrapper[4735]: I0131 15:13:35.592013 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:36 crc kubenswrapper[4735]: I0131 15:13:36.288866 4735 generic.go:334] "Generic (PLEG): container finished" podID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerID="ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" exitCode=0 Jan 31 15:13:36 crc kubenswrapper[4735]: I0131 15:13:36.289151 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerDied","Data":"ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6"} Jan 31 15:13:36 crc kubenswrapper[4735]: E0131 15:13:36.800635 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc is running failed: container process not found" containerID="60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:36 crc kubenswrapper[4735]: E0131 15:13:36.801587 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc is running failed: container process not found" containerID="60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:36 crc kubenswrapper[4735]: E0131 15:13:36.801893 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc is running failed: container process not found" containerID="60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:36 crc kubenswrapper[4735]: E0131 15:13:36.801966 4735 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-p9bcg" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" Jan 31 15:13:37 crc kubenswrapper[4735]: I0131 15:13:37.296657 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggrqd" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" containerID="cri-o://8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" gracePeriod=2 Jan 31 15:13:37 crc kubenswrapper[4735]: I0131 15:13:37.346509 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:13:37 crc kubenswrapper[4735]: I0131 15:13:37.346599 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:13:38 crc kubenswrapper[4735]: I0131 15:13:38.310633 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerID="8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" exitCode=0 Jan 31 15:13:38 crc kubenswrapper[4735]: I0131 15:13:38.310672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerDied","Data":"8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6"} Jan 31 15:13:39 crc kubenswrapper[4735]: E0131 15:13:39.729718 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 31 15:13:39 crc kubenswrapper[4735]: E0131 15:13:39.730990 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nf6h86h56h554h577hcdh88h679h64fh54dh55ch67fh58h78h64bh547hfh65fh86hd4h85h5b7hcfh57ch64hbch649h679h78hd9h66dhd7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqt4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(bb53ef9b-e389-4e78-a677-5def022eab7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:39 crc kubenswrapper[4735]: E0131 15:13:39.734561 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="bb53ef9b-e389-4e78-a677-5def022eab7e" Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.332523 4735 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="bb53ef9b-e389-4e78-a677-5def022eab7e" Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.762739 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6 is running failed: container process not found" containerID="ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.765698 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6 is running failed: container process not found" containerID="ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.766083 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6 is running failed: container process not found" containerID="ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.766220 4735 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qxj9w" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.900105 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.901748 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdkkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2f06bd71-0d33-43d8-9a0c-586aca801173): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:40 crc kubenswrapper[4735]: E0131 15:13:40.903258 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" Jan 31 15:13:41 crc kubenswrapper[4735]: E0131 15:13:41.337322 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.311272 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6 is running failed: container process not found" containerID="8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.312591 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6 is running failed: container process not found" 
containerID="8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.313221 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6 is running failed: container process not found" containerID="8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.313277 4735 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ggrqd" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.606661 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.606886 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx2dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8f2fd0fe-2906-4934-b08b-27032a482331): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.608127 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8f2fd0fe-2906-4934-b08b-27032a482331" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.638278 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.638524 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px668,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(2aad2308-9cbb-48a2-99cc-7556caf884a5): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.639956 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.664246 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.664409 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkjnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(5616e745-0304-4987-bc98-aaa42fc5f6ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:45 crc kubenswrapper[4735]: E0131 15:13:45.665696 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="5616e745-0304-4987-bc98-aaa42fc5f6ea" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.796170 4735 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.805362 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.826152 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.899349 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krmpv\" (UniqueName: \"kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv\") pod \"90a481f1-ff58-488e-93f1-e8792bc8feaf\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.899735 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities\") pod \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.900899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities" (OuterVolumeSpecName: "utilities") pod "62cf59b7-3c10-4343-aab2-3dc3fa954bdd" (UID: "62cf59b7-3c10-4343-aab2-3dc3fa954bdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.900965 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities\") pod \"90a481f1-ff58-488e-93f1-e8792bc8feaf\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901015 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content\") pod \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901053 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities\") pod \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901085 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content\") pod \"90a481f1-ff58-488e-93f1-e8792bc8feaf\" (UID: \"90a481f1-ff58-488e-93f1-e8792bc8feaf\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901185 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spd48\" (UniqueName: \"kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48\") pod \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\" (UID: \"c6dbe794-0ab2-4c12-b349-8c509ae6a218\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901224 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nltcv\" (UniqueName: \"kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv\") pod \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.901277 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content\") pod \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\" (UID: \"62cf59b7-3c10-4343-aab2-3dc3fa954bdd\") " Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.902194 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities" (OuterVolumeSpecName: "utilities") pod "90a481f1-ff58-488e-93f1-e8792bc8feaf" (UID: "90a481f1-ff58-488e-93f1-e8792bc8feaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.902365 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities" (OuterVolumeSpecName: "utilities") pod "c6dbe794-0ab2-4c12-b349-8c509ae6a218" (UID: "c6dbe794-0ab2-4c12-b349-8c509ae6a218"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.904293 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.904331 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.904341 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.905828 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv" (OuterVolumeSpecName: "kube-api-access-krmpv") pod "90a481f1-ff58-488e-93f1-e8792bc8feaf" (UID: "90a481f1-ff58-488e-93f1-e8792bc8feaf"). InnerVolumeSpecName "kube-api-access-krmpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.905928 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48" (OuterVolumeSpecName: "kube-api-access-spd48") pod "c6dbe794-0ab2-4c12-b349-8c509ae6a218" (UID: "c6dbe794-0ab2-4c12-b349-8c509ae6a218"). InnerVolumeSpecName "kube-api-access-spd48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.906197 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv" (OuterVolumeSpecName: "kube-api-access-nltcv") pod "62cf59b7-3c10-4343-aab2-3dc3fa954bdd" (UID: "62cf59b7-3c10-4343-aab2-3dc3fa954bdd"). InnerVolumeSpecName "kube-api-access-nltcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.958151 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62cf59b7-3c10-4343-aab2-3dc3fa954bdd" (UID: "62cf59b7-3c10-4343-aab2-3dc3fa954bdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4735]: I0131 15:13:45.973953 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90a481f1-ff58-488e-93f1-e8792bc8feaf" (UID: "90a481f1-ff58-488e-93f1-e8792bc8feaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.005615 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krmpv\" (UniqueName: \"kubernetes.io/projected/90a481f1-ff58-488e-93f1-e8792bc8feaf-kube-api-access-krmpv\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.005646 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90a481f1-ff58-488e-93f1-e8792bc8feaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.005656 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spd48\" (UniqueName: \"kubernetes.io/projected/c6dbe794-0ab2-4c12-b349-8c509ae6a218-kube-api-access-spd48\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.005666 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltcv\" (UniqueName: \"kubernetes.io/projected/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-kube-api-access-nltcv\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.005693 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cf59b7-3c10-4343-aab2-3dc3fa954bdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.026916 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6dbe794-0ab2-4c12-b349-8c509ae6a218" (UID: "c6dbe794-0ab2-4c12-b349-8c509ae6a218"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.107704 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6dbe794-0ab2-4c12-b349-8c509ae6a218-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.374790 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bcg" event={"ID":"62cf59b7-3c10-4343-aab2-3dc3fa954bdd","Type":"ContainerDied","Data":"9607c11a16f8bc778bd66a2583842d706fe849980ee26e7f5fc25114863b74a3"} Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.374841 4735 scope.go:117] "RemoveContainer" containerID="60f950a772509b0f64f7b77816b0cc0bcfa54b0ab83a62e27035fb769fa1fedc" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.374944 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bcg" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.381933 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxj9w" event={"ID":"90a481f1-ff58-488e-93f1-e8792bc8feaf","Type":"ContainerDied","Data":"0395fe1b88383e35e29a3d4b0026f304104e367d28bdfb3bab5e646f7190bc7d"} Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.381964 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxj9w" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.384635 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggrqd" event={"ID":"c6dbe794-0ab2-4c12-b349-8c509ae6a218","Type":"ContainerDied","Data":"f0407f6ac34d49bedb62a3e1799de7d742fffd79c772803b567911aa79aa8e80"} Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.384664 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggrqd" Jan 31 15:13:46 crc kubenswrapper[4735]: E0131 15:13:46.387881 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" Jan 31 15:13:46 crc kubenswrapper[4735]: E0131 15:13:46.387875 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="8f2fd0fe-2906-4934-b08b-27032a482331" Jan 31 15:13:46 crc kubenswrapper[4735]: E0131 15:13:46.388075 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="5616e745-0304-4987-bc98-aaa42fc5f6ea" Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.490664 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.501009 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggrqd"] Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.510512 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.517150 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxj9w"] Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.524326 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:46 crc kubenswrapper[4735]: I0131 15:13:46.532007 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9bcg"] Jan 31 15:13:47 crc kubenswrapper[4735]: I0131 15:13:47.556539 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" path="/var/lib/kubelet/pods/62cf59b7-3c10-4343-aab2-3dc3fa954bdd/volumes" Jan 31 15:13:47 crc kubenswrapper[4735]: I0131 15:13:47.557942 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" path="/var/lib/kubelet/pods/90a481f1-ff58-488e-93f1-e8792bc8feaf/volumes" Jan 31 15:13:47 crc kubenswrapper[4735]: I0131 15:13:47.559363 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" path="/var/lib/kubelet/pods/c6dbe794-0ab2-4c12-b349-8c509ae6a218/volumes" Jan 31 15:13:50 crc kubenswrapper[4735]: I0131 15:13:50.818593 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2vhbk"] Jan 31 15:13:50 crc kubenswrapper[4735]: I0131 15:13:50.858223 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v8dt8"] Jan 31 15:13:50 crc kubenswrapper[4735]: I0131 15:13:50.963751 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 15:13:51 crc kubenswrapper[4735]: I0131 
15:13:51.038293 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.426315 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.426507 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5qgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jp8d7_openstack(d3fda1c7-1f92-49b1-b3cd-fb5848208e67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.427880 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" podUID="d3fda1c7-1f92-49b1-b3cd-fb5848208e67" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.447834 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.448015 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfdpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hd9g8_openstack(e93d7768-7f1e-467c-97df-29645555a92a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.449195 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" podUID="e93d7768-7f1e-467c-97df-29645555a92a" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.455108 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.455301 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8hcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2dzxs_openstack(34a19c2d-fb87-483b-b56c-948f1ba8f2a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.456731 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" podUID="34a19c2d-fb87-483b-b56c-948f1ba8f2a0" Jan 31 15:13:51 crc kubenswrapper[4735]: W0131 15:13:51.470637 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07524504_28f6_44cc_8630_2e736f87ff3d.slice/crio-253ae46101090069bdf2c27cc799b517d27dd5af063933b0faeb35e6e71f4062 WatchSource:0}: Error finding container 253ae46101090069bdf2c27cc799b517d27dd5af063933b0faeb35e6e71f4062: Status 404 returned error can't find the container with id 253ae46101090069bdf2c27cc799b517d27dd5af063933b0faeb35e6e71f4062 Jan 31 15:13:51 crc kubenswrapper[4735]: W0131 15:13:51.471883 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc0eebe3_8b72_4599_b6f6_ba54f3836563.slice/crio-75accc03a3a84d1e08a3caa52e32e0fec1a3babfe8fd53d0a6a37c9b8976e669 WatchSource:0}: Error finding container 75accc03a3a84d1e08a3caa52e32e0fec1a3babfe8fd53d0a6a37c9b8976e669: Status 404 returned error can't find the container with id 75accc03a3a84d1e08a3caa52e32e0fec1a3babfe8fd53d0a6a37c9b8976e669 Jan 31 15:13:51 crc kubenswrapper[4735]: W0131 15:13:51.476828 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667db586_48c3_4b33_8e39_eb27c45d7841.slice/crio-d5b85bc2c4e037c317ec1cad23086d8f96ef6304eac6b6260bb37c4f7c23befa WatchSource:0}: Error finding container d5b85bc2c4e037c317ec1cad23086d8f96ef6304eac6b6260bb37c4f7c23befa: Status 404 returned error can't find the container with id d5b85bc2c4e037c317ec1cad23086d8f96ef6304eac6b6260bb37c4f7c23befa Jan 31 15:13:51 crc kubenswrapper[4735]: W0131 15:13:51.477195 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ff11e61_5fe3_474b_ac0d_8a89a364de0e.slice/crio-d3c732deb55ee4a27f03f680928a2d226572a9ee1167e8bf3ad6d425d89c62f9 WatchSource:0}: Error finding container d3c732deb55ee4a27f03f680928a2d226572a9ee1167e8bf3ad6d425d89c62f9: Status 404 returned error can't find the container with id d3c732deb55ee4a27f03f680928a2d226572a9ee1167e8bf3ad6d425d89c62f9 Jan 31 15:13:51 crc kubenswrapper[4735]: I0131 15:13:51.484275 4735 scope.go:117] "RemoveContainer" containerID="aab76e1f019ff58ffd8d10826b431c0560c2e04856fd09f4281c298aeaaa66d9" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.489554 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.489973 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xms6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-bmmnk_openstack(c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:13:51 crc kubenswrapper[4735]: E0131 15:13:51.491689 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" podUID="c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.158118 4735 scope.go:117] "RemoveContainer" containerID="3a747406b4760ce00ea9b9e9e17461f0a9e72c906aaeb7135c410b5b5e640f56" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.171636 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.171687 4735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.171817 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7dm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(bf00fd0b-9de0-4726-ae79-94596a39fffe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.173871 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.192955 4735 scope.go:117] "RemoveContainer" containerID="ee22c73c0a4015cb8b08d55be8ab9f83bddff03c6c5b2e4b7195d0ecba2587b6" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.222945 4735 scope.go:117] "RemoveContainer" containerID="14d195ea6a0c1c4e87d4034846b34f1938f682a8a7a38f221a4713446cba0e07" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.365359 4735 scope.go:117] "RemoveContainer" containerID="bff10f910b7ceb5176578efa530bdadbdbbb0f32fd2d66540198fa6e556918b1" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.410625 4735 scope.go:117] "RemoveContainer" containerID="8eaa055151e8f0622a9a17a12b7f2c54ec88c97e134c1da095e12f5f064e02f6" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.437222 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7ff11e61-5fe3-474b-ac0d-8a89a364de0e","Type":"ContainerStarted","Data":"d3c732deb55ee4a27f03f680928a2d226572a9ee1167e8bf3ad6d425d89c62f9"} Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.440664 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk" event={"ID":"07524504-28f6-44cc-8630-2e736f87ff3d","Type":"ContainerStarted","Data":"253ae46101090069bdf2c27cc799b517d27dd5af063933b0faeb35e6e71f4062"} Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.442557 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v8dt8" event={"ID":"bc0eebe3-8b72-4599-b6f6-ba54f3836563","Type":"ContainerStarted","Data":"75accc03a3a84d1e08a3caa52e32e0fec1a3babfe8fd53d0a6a37c9b8976e669"} Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.445942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"667db586-48c3-4b33-8e39-eb27c45d7841","Type":"ContainerStarted","Data":"d5b85bc2c4e037c317ec1cad23086d8f96ef6304eac6b6260bb37c4f7c23befa"} Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.459813 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" podUID="e93d7768-7f1e-467c-97df-29645555a92a" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.459868 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" podUID="c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" Jan 31 15:13:52 crc kubenswrapper[4735]: E0131 15:13:52.459895 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.471359 4735 scope.go:117] "RemoveContainer" containerID="c9cd616bc406818f65ff998f70bddd8bc2a7ddb69233914487f3fa5676fa96f9" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.575141 4735 scope.go:117] "RemoveContainer" containerID="e5914767c2f7eee2a2beb8c2c9c12f1e5e40e44a67ee1de387463a447ef5d588" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.957520 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:52 crc kubenswrapper[4735]: I0131 15:13:52.983395 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.062512 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc\") pod \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.062619 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config\") pod \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.063148 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34a19c2d-fb87-483b-b56c-948f1ba8f2a0" (UID: "34a19c2d-fb87-483b-b56c-948f1ba8f2a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.063282 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config" (OuterVolumeSpecName: "config") pod "d3fda1c7-1f92-49b1-b3cd-fb5848208e67" (UID: "d3fda1c7-1f92-49b1-b3cd-fb5848208e67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.063370 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5qgr\" (UniqueName: \"kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr\") pod \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\" (UID: \"d3fda1c7-1f92-49b1-b3cd-fb5848208e67\") " Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.064120 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8hcm\" (UniqueName: \"kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm\") pod \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.064150 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config\") pod \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\" (UID: \"34a19c2d-fb87-483b-b56c-948f1ba8f2a0\") " Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.064540 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.064703 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.064821 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config" (OuterVolumeSpecName: "config") pod "34a19c2d-fb87-483b-b56c-948f1ba8f2a0" (UID: "34a19c2d-fb87-483b-b56c-948f1ba8f2a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.070135 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr" (OuterVolumeSpecName: "kube-api-access-t5qgr") pod "d3fda1c7-1f92-49b1-b3cd-fb5848208e67" (UID: "d3fda1c7-1f92-49b1-b3cd-fb5848208e67"). InnerVolumeSpecName "kube-api-access-t5qgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.070286 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm" (OuterVolumeSpecName: "kube-api-access-t8hcm") pod "34a19c2d-fb87-483b-b56c-948f1ba8f2a0" (UID: "34a19c2d-fb87-483b-b56c-948f1ba8f2a0"). InnerVolumeSpecName "kube-api-access-t8hcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.166107 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5qgr\" (UniqueName: \"kubernetes.io/projected/d3fda1c7-1f92-49b1-b3cd-fb5848208e67-kube-api-access-t5qgr\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.166150 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8hcm\" (UniqueName: \"kubernetes.io/projected/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-kube-api-access-t8hcm\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.166161 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a19c2d-fb87-483b-b56c-948f1ba8f2a0-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.462036 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" event={"ID":"34a19c2d-fb87-483b-b56c-948f1ba8f2a0","Type":"ContainerDied","Data":"40a4097700c87e745febc4c4eef20bb98022e8f88780fbcc003bf3863f576deb"} Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.462117 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2dzxs" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.465576 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bb53ef9b-e389-4e78-a677-5def022eab7e","Type":"ContainerStarted","Data":"43cd3903842a6d24582f387c93d619d3831288d7b280f160ad044337845e0cd9"} Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.465820 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.486726 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" event={"ID":"d3fda1c7-1f92-49b1-b3cd-fb5848208e67","Type":"ContainerDied","Data":"5dc5cf07ed1caaf507969287e3bf6cc6d6b2adf028ad3663a296d1523a757a40"} Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.486811 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jp8d7" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.514285 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.160925925 podStartE2EDuration="34.514259302s" podCreationTimestamp="2026-01-31 15:13:19 +0000 UTC" firstStartedPulling="2026-01-31 15:13:20.025026294 +0000 UTC m=+885.798355336" lastFinishedPulling="2026-01-31 15:13:52.378359671 +0000 UTC m=+918.151688713" observedRunningTime="2026-01-31 15:13:53.481233068 +0000 UTC m=+919.254562110" watchObservedRunningTime="2026-01-31 15:13:53.514259302 +0000 UTC m=+919.287588344" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.525355 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.534191 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2dzxs"] Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.646058 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a19c2d-fb87-483b-b56c-948f1ba8f2a0" path="/var/lib/kubelet/pods/34a19c2d-fb87-483b-b56c-948f1ba8f2a0/volumes" Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.647692 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:53 crc kubenswrapper[4735]: I0131 15:13:53.647719 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jp8d7"] Jan 31 15:13:55 crc kubenswrapper[4735]: I0131 15:13:55.551154 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fda1c7-1f92-49b1-b3cd-fb5848208e67" path="/var/lib/kubelet/pods/d3fda1c7-1f92-49b1-b3cd-fb5848208e67/volumes" Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.511763 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"667db586-48c3-4b33-8e39-eb27c45d7841","Type":"ContainerStarted","Data":"dc571933aebb817f186fad7570c3b15a89d8509f4cb8919e0be5de0c20498de3"} Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.514712 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7ff11e61-5fe3-474b-ac0d-8a89a364de0e","Type":"ContainerStarted","Data":"3646826142f508433ad207e1963ea4723b81433187eca1da2baa40b79fe933a4"} Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.516387 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk" event={"ID":"07524504-28f6-44cc-8630-2e736f87ff3d","Type":"ContainerStarted","Data":"211232d703520a3493cdf5ff57a778a03f808d833a8b910f9cde24bc09ee9094"} Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.517753 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2vhbk" Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.519725 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v8dt8" event={"ID":"bc0eebe3-8b72-4599-b6f6-ba54f3836563","Type":"ContainerStarted","Data":"0e7df686109015a368626a9e7122b7df72fa6c8901d3ec12194a7935aaeaa3ff"} Jan 31 15:13:56 crc kubenswrapper[4735]: I0131 15:13:56.540003 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2vhbk" podStartSLOduration=28.180329201 podStartE2EDuration="32.539981659s" podCreationTimestamp="2026-01-31 15:13:24 +0000 UTC" firstStartedPulling="2026-01-31 
15:13:51.489169743 +0000 UTC m=+917.262498825" lastFinishedPulling="2026-01-31 15:13:55.848822231 +0000 UTC m=+921.622151283" observedRunningTime="2026-01-31 15:13:56.533311841 +0000 UTC m=+922.306640883" watchObservedRunningTime="2026-01-31 15:13:56.539981659 +0000 UTC m=+922.313310701" Jan 31 15:13:57 crc kubenswrapper[4735]: I0131 15:13:57.531720 4735 generic.go:334] "Generic (PLEG): container finished" podID="bc0eebe3-8b72-4599-b6f6-ba54f3836563" containerID="0e7df686109015a368626a9e7122b7df72fa6c8901d3ec12194a7935aaeaa3ff" exitCode=0 Jan 31 15:13:57 crc kubenswrapper[4735]: I0131 15:13:57.531936 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v8dt8" event={"ID":"bc0eebe3-8b72-4599-b6f6-ba54f3836563","Type":"ContainerDied","Data":"0e7df686109015a368626a9e7122b7df72fa6c8901d3ec12194a7935aaeaa3ff"} Jan 31 15:13:57 crc kubenswrapper[4735]: I0131 15:13:57.536262 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerStarted","Data":"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.546461 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"667db586-48c3-4b33-8e39-eb27c45d7841","Type":"ContainerStarted","Data":"380767fa131322d650ead25e7d9e36dc9ef942871698fd437ace4ec886298f9a"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.551060 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7ff11e61-5fe3-474b-ac0d-8a89a364de0e","Type":"ContainerStarted","Data":"d1ae469a26b33efa6b3f24422fdec3b97dad89cdb5b5e19c1c2c29ea0a04951c"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.554486 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v8dt8" event={"ID":"bc0eebe3-8b72-4599-b6f6-ba54f3836563","Type":"ContainerStarted","Data":"f473fb659417e2c87925e4f3e98d046940e642848385626ceee6acfc3886e588"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.554517 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v8dt8" event={"ID":"bc0eebe3-8b72-4599-b6f6-ba54f3836563","Type":"ContainerStarted","Data":"fa7e275c3053bbf52cf8e9773aa25a360bc632b19b9acb1e3688a0262d3ba32f"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.555073 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.555110 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.557050 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5616e745-0304-4987-bc98-aaa42fc5f6ea","Type":"ContainerStarted","Data":"944e3d6dcb99a7f6894038cd2deea41388d179f8e07b61746f5791bfd363dcca"} Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.583726 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=29.340073538 podStartE2EDuration="35.583700396s" podCreationTimestamp="2026-01-31 15:13:23 +0000 UTC" firstStartedPulling="2026-01-31 15:13:51.484564383 +0000 UTC m=+917.257893465" lastFinishedPulling="2026-01-31 15:13:57.728191281 +0000 UTC m=+923.501520323" observedRunningTime="2026-01-31 15:13:58.581201565 +0000 UTC 
m=+924.354530637" watchObservedRunningTime="2026-01-31 15:13:58.583700396 +0000 UTC m=+924.357029468" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.649273 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.410757546 podStartE2EDuration="31.649246569s" podCreationTimestamp="2026-01-31 15:13:27 +0000 UTC" firstStartedPulling="2026-01-31 15:13:51.484261854 +0000 UTC m=+917.257590896" lastFinishedPulling="2026-01-31 15:13:57.722750877 +0000 UTC m=+923.496079919" observedRunningTime="2026-01-31 15:13:58.642656643 +0000 UTC m=+924.415985725" watchObservedRunningTime="2026-01-31 15:13:58.649246569 +0000 UTC m=+924.422575651" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.671679 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-v8dt8" podStartSLOduration=30.311338256 podStartE2EDuration="34.671658423s" podCreationTimestamp="2026-01-31 15:13:24 +0000 UTC" firstStartedPulling="2026-01-31 15:13:51.484959994 +0000 UTC m=+917.258289046" lastFinishedPulling="2026-01-31 15:13:55.845280171 +0000 UTC m=+921.618609213" observedRunningTime="2026-01-31 15:13:58.664724847 +0000 UTC m=+924.438053939" watchObservedRunningTime="2026-01-31 15:13:58.671658423 +0000 UTC m=+924.444987465" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.727911 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:58 crc kubenswrapper[4735]: I0131 15:13:58.728135 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 15:13:59 crc kubenswrapper[4735]: I0131 15:13:59.539194 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 15:14:00 crc kubenswrapper[4735]: I0131 15:14:00.375446 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.375776 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.428828 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.491364 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531326 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531679 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531696 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531707 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531714 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531728 4735 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531734 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531743 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531749 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531759 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531764 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531781 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531787 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="extract-utilities" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531797 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531803 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531818 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531823 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="extract-content" Jan 31 15:14:01 crc kubenswrapper[4735]: E0131 15:14:01.531833 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531839 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531970 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a481f1-ff58-488e-93f1-e8792bc8feaf" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531979 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cf59b7-3c10-4343-aab2-3dc3fa954bdd" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.531987 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6dbe794-0ab2-4c12-b349-8c509ae6a218" containerName="registry-server" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.532761 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.609256 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.675529 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.675636 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.675751 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h64z\" (UniqueName: \"kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.690081 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.777637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.777791 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.777862 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h64z\" (UniqueName: \"kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.779349 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.779552 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 
15:14:01.784035 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.811281 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h64z\" (UniqueName: \"kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z\") pod \"dnsmasq-dns-7cb5889db5-mppk9\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.879298 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.920125 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.951405 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.980557 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-qw5r8"] Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.981959 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.985784 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.992769 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-qw5r8"] Jan 31 15:14:01 crc kubenswrapper[4735]: I0131 15:14:01.999793 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.018641 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fjc7w"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.019730 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.030220 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.038273 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fjc7w"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.083565 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.083614 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.083660 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.083744 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brr89\" (UniqueName: \"kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.172109 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-qw5r8"] Jan 31 15:14:02 crc kubenswrapper[4735]: E0131 15:14:02.174075 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-brr89 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" podUID="13159579-bb45-4bf1-917a-c9c0acac3c5e" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185325 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc\") pod \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185563 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xms6\" (UniqueName: \"kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6\") pod \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\" (UID: \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185590 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config\") pod \"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\" (UID: 
\"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185960 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.185984 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-combined-ca-bundle\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186014 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186044 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brr89\" (UniqueName: \"kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186067 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovn-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186098 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovs-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186152 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce44083a-52e6-45b6-bd3f-90ae832c54fa-config\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186199 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqn6p\" (UniqueName: \"kubernetes.io/projected/ce44083a-52e6-45b6-bd3f-90ae832c54fa-kube-api-access-xqn6p\") pod 
\"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.186256 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.187527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.187731 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" (UID: "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.190757 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.191078 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config" (OuterVolumeSpecName: "config") pod "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" (UID: "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.196895 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6" (OuterVolumeSpecName: "kube-api-access-5xms6") pod "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" (UID: "c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9"). InnerVolumeSpecName "kube-api-access-5xms6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.197386 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.218713 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.219861 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.226531 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jb2fk" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.227601 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.227851 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.235825 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.237650 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brr89\" (UniqueName: \"kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89\") pod \"dnsmasq-dns-74f6f696b9-qw5r8\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.240179 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.262594 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.264649 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.272159 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.282037 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.287835 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288300 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-combined-ca-bundle\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288344 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovn-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288372 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovs-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 
15:14:02.288415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce44083a-52e6-45b6-bd3f-90ae832c54fa-config\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288463 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqn6p\" (UniqueName: \"kubernetes.io/projected/ce44083a-52e6-45b6-bd3f-90ae832c54fa-kube-api-access-xqn6p\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288556 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xms6\" (UniqueName: \"kubernetes.io/projected/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-kube-api-access-5xms6\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288567 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.288578 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.289018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovs-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.289094 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce44083a-52e6-45b6-bd3f-90ae832c54fa-ovn-rundir\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.289540 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce44083a-52e6-45b6-bd3f-90ae832c54fa-config\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.290545 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.297244 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce44083a-52e6-45b6-bd3f-90ae832c54fa-combined-ca-bundle\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.307203 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xqn6p\" (UniqueName: \"kubernetes.io/projected/ce44083a-52e6-45b6-bd3f-90ae832c54fa-kube-api-access-xqn6p\") pod \"ovn-controller-metrics-fjc7w\" (UID: \"ce44083a-52e6-45b6-bd3f-90ae832c54fa\") " pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.335218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fjc7w" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389582 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzzw\" (UniqueName: \"kubernetes.io/projected/bbe4f564-44b1-441d-aed3-b08ad06141c6-kube-api-access-lqzzw\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389703 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-config\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389787 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389818 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389868 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389900 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389954 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " 
pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.389984 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-scripts\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.390026 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.390044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdq8q\" (UniqueName: \"kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.390070 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.464585 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492344 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492394 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzzw\" (UniqueName: \"kubernetes.io/projected/bbe4f564-44b1-441d-aed3-b08ad06141c6-kube-api-access-lqzzw\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492418 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-config\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492453 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492484 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " 
pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492557 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492588 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492643 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-scripts\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492660 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.492676 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdq8q\" (UniqueName: \"kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.493752 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.494462 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.494707 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.494735 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.495332 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.495412 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-config\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.495703 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbe4f564-44b1-441d-aed3-b08ad06141c6-scripts\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.499169 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.499191 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.499698 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4f564-44b1-441d-aed3-b08ad06141c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.508677 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzzw\" (UniqueName: \"kubernetes.io/projected/bbe4f564-44b1-441d-aed3-b08ad06141c6-kube-api-access-lqzzw\") pod \"ovn-northd-0\" (UID: \"bbe4f564-44b1-441d-aed3-b08ad06141c6\") " pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.508954 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdq8q\" (UniqueName: \"kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q\") pod \"dnsmasq-dns-698758b865-7vmqt\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.558165 4735 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.585854 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.586038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.604002 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfdpf\" (UniqueName: \"kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf\") pod \"e93d7768-7f1e-467c-97df-29645555a92a\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.604098 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc\") pod \"e93d7768-7f1e-467c-97df-29645555a92a\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.604129 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config\") pod \"e93d7768-7f1e-467c-97df-29645555a92a\" (UID: \"e93d7768-7f1e-467c-97df-29645555a92a\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.604725 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config" (OuterVolumeSpecName: "config") pod "e93d7768-7f1e-467c-97df-29645555a92a" (UID: "e93d7768-7f1e-467c-97df-29645555a92a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.604745 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e93d7768-7f1e-467c-97df-29645555a92a" (UID: "e93d7768-7f1e-467c-97df-29645555a92a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: W0131 15:14:02.621606 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdb13875_9a6e_4ee1_990c_7546ceb5ebd8.slice/crio-6e9e4e78f49d3769503768f5a122e77f94f3989212547446841edd0de49f0e0c WatchSource:0}: Error finding container 6e9e4e78f49d3769503768f5a122e77f94f3989212547446841edd0de49f0e0c: Status 404 returned error can't find the container with id 6e9e4e78f49d3769503768f5a122e77f94f3989212547446841edd0de49f0e0c Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.636641 4735 generic.go:334] "Generic (PLEG): container finished" podID="5616e745-0304-4987-bc98-aaa42fc5f6ea" containerID="944e3d6dcb99a7f6894038cd2deea41388d179f8e07b61746f5791bfd363dcca" exitCode=0 Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.636745 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5616e745-0304-4987-bc98-aaa42fc5f6ea","Type":"ContainerDied","Data":"944e3d6dcb99a7f6894038cd2deea41388d179f8e07b61746f5791bfd363dcca"} Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.642188 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" event={"ID":"c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9","Type":"ContainerDied","Data":"7b9ef1d827cbccbc0be68c630334eb9d0c9b76565475b2bf5a9649e96445694f"} Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.642293 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bmmnk" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.656841 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.656847 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hd9g8" event={"ID":"e93d7768-7f1e-467c-97df-29645555a92a","Type":"ContainerDied","Data":"d29dbad05ac3dd77b6d0d9bb05639a19794be197a007034fa4f92a215df54b78"} Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.658798 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f2fd0fe-2906-4934-b08b-27032a482331","Type":"ContainerStarted","Data":"6aee00722e656944d06f8e5dba019e5ee5d9e0a951356eea0cdf68b55669f145"} Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.659195 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.666064 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.673915 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.678605 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.679994 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.680897 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.681029 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rd6jn" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.685131 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.701255 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf" (OuterVolumeSpecName: "kube-api-access-xfdpf") pod "e93d7768-7f1e-467c-97df-29645555a92a" (UID: "e93d7768-7f1e-467c-97df-29645555a92a"). InnerVolumeSpecName "kube-api-access-xfdpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.706218 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfdpf\" (UniqueName: \"kubernetes.io/projected/e93d7768-7f1e-467c-97df-29645555a92a-kube-api-access-xfdpf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.706248 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.706258 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d7768-7f1e-467c-97df-29645555a92a-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.766923 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.808591 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-lock\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.808684 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.808764 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-cache\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.808876 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp77h\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-kube-api-access-sp77h\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.809110 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.809161 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.859342 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.870556 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bmmnk"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.887708 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fjc7w"] Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.910772 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc\") pod \"13159579-bb45-4bf1-917a-c9c0acac3c5e\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.910802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config\") pod \"13159579-bb45-4bf1-917a-c9c0acac3c5e\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.910836 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb\") pod \"13159579-bb45-4bf1-917a-c9c0acac3c5e\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.910912 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brr89\" (UniqueName: \"kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89\") pod \"13159579-bb45-4bf1-917a-c9c0acac3c5e\" (UID: \"13159579-bb45-4bf1-917a-c9c0acac3c5e\") " Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911126 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911163 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-lock\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911223 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911247 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-cache\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.911273 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp77h\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-kube-api-access-sp77h\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.916549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-lock\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: E0131 15:14:02.916575 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:02 crc kubenswrapper[4735]: E0131 15:14:02.916601 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:02 crc kubenswrapper[4735]: E0131 
15:14:02.916713 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:03.4166935 +0000 UTC m=+929.190022532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.916852 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.916853 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13159579-bb45-4bf1-917a-c9c0acac3c5e" (UID: "13159579-bb45-4bf1-917a-c9c0acac3c5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.917143 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13159579-bb45-4bf1-917a-c9c0acac3c5e" (UID: "13159579-bb45-4bf1-917a-c9c0acac3c5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.917257 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-cache\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.917763 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config" (OuterVolumeSpecName: "config") pod "13159579-bb45-4bf1-917a-c9c0acac3c5e" (UID: "13159579-bb45-4bf1-917a-c9c0acac3c5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.926920 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.932166 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89" (OuterVolumeSpecName: "kube-api-access-brr89") pod "13159579-bb45-4bf1-917a-c9c0acac3c5e" (UID: "13159579-bb45-4bf1-917a-c9c0acac3c5e"). InnerVolumeSpecName "kube-api-access-brr89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.932188 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp77h\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-kube-api-access-sp77h\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:02 crc kubenswrapper[4735]: I0131 15:14:02.949299 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.026158 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.026200 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.026210 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13159579-bb45-4bf1-917a-c9c0acac3c5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.026222 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brr89\" (UniqueName: \"kubernetes.io/projected/13159579-bb45-4bf1-917a-c9c0acac3c5e-kube-api-access-brr89\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.030240 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.038124 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hd9g8"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.088151 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:14:03 crc kubenswrapper[4735]: W0131 15:14:03.099825 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod301f815b_72e0_4b50_8f46_e1b7de77b8fe.slice/crio-d07d324556573b3112afee290e2b5e9467a88ee5733aa47b9db3f212c3300480 WatchSource:0}: Error finding container d07d324556573b3112afee290e2b5e9467a88ee5733aa47b9db3f212c3300480: Status 404 returned error can't find the container with id d07d324556573b3112afee290e2b5e9467a88ee5733aa47b9db3f212c3300480 Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.220223 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.447023 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:03 crc kubenswrapper[4735]: E0131 15:14:03.447237 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:03 crc 
kubenswrapper[4735]: E0131 15:14:03.447269 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:03 crc kubenswrapper[4735]: E0131 15:14:03.447328 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:04.447311311 +0000 UTC m=+930.220640353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.555350 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9" path="/var/lib/kubelet/pods/c5a852a2-4bea-45d1-95d1-b7a82fe0f0a9/volumes" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.555730 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93d7768-7f1e-467c-97df-29645555a92a" path="/var/lib/kubelet/pods/e93d7768-7f1e-467c-97df-29645555a92a/volumes" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.556136 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.557989 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.572506 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.650785 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x75s\" (UniqueName: \"kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.651126 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.651270 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.665391 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fjc7w" event={"ID":"ce44083a-52e6-45b6-bd3f-90ae832c54fa","Type":"ContainerStarted","Data":"57598098abe22fb904cb8da2cd0cc846758c43d513ffe0e442937ab806dc092f"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.665451 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-fjc7w" event={"ID":"ce44083a-52e6-45b6-bd3f-90ae832c54fa","Type":"ContainerStarted","Data":"0c527f3a45ff10471f56796f062e89e4d680832f59b79a9ec23c610dc7ca4dae"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.668570 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5616e745-0304-4987-bc98-aaa42fc5f6ea","Type":"ContainerStarted","Data":"48554ec780cad242e69fc5a9bc0f3cfc5483efef77c3407764d35289dc18d293"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.670878 4735 generic.go:334] "Generic (PLEG): container finished" podID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerID="b42c22abca7dd71a03a01cd5ddd3271046e87877be669e305489e9671259099c" exitCode=0 Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.670925 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7vmqt" event={"ID":"301f815b-72e0-4b50-8f46-e1b7de77b8fe","Type":"ContainerDied","Data":"b42c22abca7dd71a03a01cd5ddd3271046e87877be669e305489e9671259099c"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.670960 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7vmqt" event={"ID":"301f815b-72e0-4b50-8f46-e1b7de77b8fe","Type":"ContainerStarted","Data":"d07d324556573b3112afee290e2b5e9467a88ee5733aa47b9db3f212c3300480"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.672716 4735 generic.go:334] "Generic (PLEG): container finished" podID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerID="235fd3ed871321abbf3bbe6afaab4a14ab1560f2e9dab609d9dad825e08b0d6f" exitCode=0 Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.672781 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" event={"ID":"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8","Type":"ContainerDied","Data":"235fd3ed871321abbf3bbe6afaab4a14ab1560f2e9dab609d9dad825e08b0d6f"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.672815 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" event={"ID":"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8","Type":"ContainerStarted","Data":"6e9e4e78f49d3769503768f5a122e77f94f3989212547446841edd0de49f0e0c"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.674157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerStarted","Data":"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.675272 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bbe4f564-44b1-441d-aed3-b08ad06141c6","Type":"ContainerStarted","Data":"4f22d2c134efe4c789779d132ba45301f331adbf5c434fc5fd92df3787f15801"} Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.675318 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-qw5r8" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.747809 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fjc7w" podStartSLOduration=2.747790755 podStartE2EDuration="2.747790755s" podCreationTimestamp="2026-01-31 15:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:03.695343112 +0000 UTC m=+929.468672174" watchObservedRunningTime="2026-01-31 15:14:03.747790755 +0000 UTC m=+929.521119787" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.753698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x75s\" (UniqueName: \"kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.754035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.754497 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.756639 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.757465 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.781032 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x75s\" (UniqueName: \"kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s\") pod \"redhat-marketplace-kgw44\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.830172 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-qw5r8"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.835474 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-qw5r8"] Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.854760 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.629609218 
podStartE2EDuration="46.854740208s" podCreationTimestamp="2026-01-31 15:13:17 +0000 UTC" firstStartedPulling="2026-01-31 15:13:19.767016893 +0000 UTC m=+885.540345935" lastFinishedPulling="2026-01-31 15:13:57.992147883 +0000 UTC m=+923.765476925" observedRunningTime="2026-01-31 15:14:03.846234548 +0000 UTC m=+929.619563600" watchObservedRunningTime="2026-01-31 15:14:03.854740208 +0000 UTC m=+929.628069250" Jan 31 15:14:03 crc kubenswrapper[4735]: I0131 15:14:03.932741 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.331210 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:04 crc kubenswrapper[4735]: W0131 15:14:04.345706 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod618f0aea_b372_491a_a34c_25d808b450dd.slice/crio-4db14b36107a2947e4f3a3c9b1a14344ce934cc6723a13fb06d40c2c654a037a WatchSource:0}: Error finding container 4db14b36107a2947e4f3a3c9b1a14344ce934cc6723a13fb06d40c2c654a037a: Status 404 returned error can't find the container with id 4db14b36107a2947e4f3a3c9b1a14344ce934cc6723a13fb06d40c2c654a037a Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.467366 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:04 crc kubenswrapper[4735]: E0131 15:14:04.467574 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:04 crc kubenswrapper[4735]: E0131 15:14:04.467600 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:04 crc kubenswrapper[4735]: E0131 15:14:04.467660 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:06.467640425 +0000 UTC m=+932.240969467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.682356 4735 generic.go:334] "Generic (PLEG): container finished" podID="618f0aea-b372-491a-a34c-25d808b450dd" containerID="b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877" exitCode=0 Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.682833 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerDied","Data":"b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877"} Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.682916 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerStarted","Data":"4db14b36107a2947e4f3a3c9b1a14344ce934cc6723a13fb06d40c2c654a037a"} Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.684221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7vmqt" event={"ID":"301f815b-72e0-4b50-8f46-e1b7de77b8fe","Type":"ContainerStarted","Data":"76a6afb5a810dc257663d7a8b0bece28684555e824d7c7334b3afccba6877d37"} Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.685119 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.686729 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" event={"ID":"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8","Type":"ContainerStarted","Data":"8e03dcb5e85390c0ba7d840e691729eba9c81d5d05d7ff2b35f3f163711e4192"} Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.686791 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.728946 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" podStartSLOduration=3.23435373 podStartE2EDuration="3.728929112s" podCreationTimestamp="2026-01-31 15:14:01 +0000 UTC" firstStartedPulling="2026-01-31 15:14:02.623606755 +0000 UTC m=+928.396935797" lastFinishedPulling="2026-01-31 15:14:03.118182137 +0000 UTC m=+928.891511179" observedRunningTime="2026-01-31 15:14:04.724669491 +0000 UTC m=+930.497998553" watchObservedRunningTime="2026-01-31 15:14:04.728929112 +0000 UTC m=+930.502258154" Jan 31 15:14:04 crc kubenswrapper[4735]: I0131 15:14:04.748091 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7vmqt" podStartSLOduration=2.747850977 podStartE2EDuration="2.747850977s" podCreationTimestamp="2026-01-31 15:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:04.745964283 +0000 UTC m=+930.519293345" watchObservedRunningTime="2026-01-31 15:14:04.747850977 +0000 UTC m=+930.521180019" Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.568487 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13159579-bb45-4bf1-917a-c9c0acac3c5e" 
path="/var/lib/kubelet/pods/13159579-bb45-4bf1-917a-c9c0acac3c5e/volumes" Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.697610 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bbe4f564-44b1-441d-aed3-b08ad06141c6","Type":"ContainerStarted","Data":"c7fabe25d17034c77c8fa4e0f2a0aa227b8e94ed739cf65d44169f27864226c8"} Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.697660 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bbe4f564-44b1-441d-aed3-b08ad06141c6","Type":"ContainerStarted","Data":"744f5c6c77424d44fe2d27ca3122e9d90bf33ac4a24897f4d4c44796b573efb1"} Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.697697 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.700859 4735 generic.go:334] "Generic (PLEG): container finished" podID="618f0aea-b372-491a-a34c-25d808b450dd" containerID="dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a" exitCode=0 Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.700910 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerDied","Data":"dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a"} Jan 31 15:14:05 crc kubenswrapper[4735]: I0131 15:14:05.738728 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.162030927 podStartE2EDuration="3.738702229s" podCreationTimestamp="2026-01-31 15:14:02 +0000 UTC" firstStartedPulling="2026-01-31 15:14:03.232079546 +0000 UTC m=+929.005408588" lastFinishedPulling="2026-01-31 15:14:04.808750848 +0000 UTC m=+930.582079890" observedRunningTime="2026-01-31 15:14:05.722053858 +0000 UTC m=+931.495382950" watchObservedRunningTime="2026-01-31 15:14:05.738702229 +0000 UTC m=+931.512031271" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.502191 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:06 crc kubenswrapper[4735]: E0131 15:14:06.502450 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:06 crc kubenswrapper[4735]: E0131 15:14:06.502485 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:06 crc kubenswrapper[4735]: E0131 15:14:06.502558 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:10.502538023 +0000 UTC m=+936.275867085 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.605427 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6np6m"] Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.606964 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.608922 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.609187 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.615446 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.627095 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6np6m"] Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705041 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705101 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705135 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6c95\" (UniqueName: \"kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705220 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf\") pod 
\"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.705312 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.709184 4735 generic.go:334] "Generic (PLEG): container finished" podID="8f2fd0fe-2906-4934-b08b-27032a482331" containerID="6aee00722e656944d06f8e5dba019e5ee5d9e0a951356eea0cdf68b55669f145" exitCode=0 Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.709291 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f2fd0fe-2906-4934-b08b-27032a482331","Type":"ContainerDied","Data":"6aee00722e656944d06f8e5dba019e5ee5d9e0a951356eea0cdf68b55669f145"} Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.712062 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerStarted","Data":"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b"} Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.769795 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kgw44" podStartSLOduration=2.419265687 podStartE2EDuration="3.769773617s" podCreationTimestamp="2026-01-31 15:14:03 +0000 UTC" firstStartedPulling="2026-01-31 15:14:04.7465663 +0000 UTC m=+930.519895342" lastFinishedPulling="2026-01-31 15:14:06.09707419 +0000 UTC m=+931.870403272" observedRunningTime="2026-01-31 15:14:06.750845262 +0000 UTC m=+932.524174304" watchObservedRunningTime="2026-01-31 15:14:06.769773617 +0000 UTC m=+932.543102649" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807010 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807093 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6c95\" (UniqueName: \"kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807147 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807169 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " 
pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807265 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.807344 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.808766 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.809529 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.809587 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.812260 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.812509 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.812694 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.823505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-p6c95\" (UniqueName: \"kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95\") pod \"swift-ring-rebalance-6np6m\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:06 crc kubenswrapper[4735]: I0131 15:14:06.925522 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.345775 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.345828 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.596783 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6np6m"] Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.719245 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6np6m" event={"ID":"83dc2b98-a7e1-4654-95cf-fd37532fa571","Type":"ContainerStarted","Data":"0e61f45d0ef59c90cc8103e2726f908240d1cc9a86f7ab3fbc3f29e1ed7e5a3f"} Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.722015 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8f2fd0fe-2906-4934-b08b-27032a482331","Type":"ContainerStarted","Data":"bbb09c1279c14c2cd27b6c4596b2de7bc1e42bbfc77db718933526e71cded1d9"} Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.724977 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf00fd0b-9de0-4726-ae79-94596a39fffe","Type":"ContainerStarted","Data":"323fb601661e4add284c619f2e09a68cb2a2c6496ad7bba5a5e3aa936d97a678"} Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.726024 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.758175 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371985.096619 podStartE2EDuration="51.758156898s" podCreationTimestamp="2026-01-31 15:13:16 +0000 UTC" firstStartedPulling="2026-01-31 15:13:18.139593483 +0000 UTC m=+883.912922526" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:07.754234158 +0000 UTC m=+933.527563200" watchObservedRunningTime="2026-01-31 15:14:07.758156898 +0000 UTC m=+933.531485940" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.773681 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.773736 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 15:14:07 crc kubenswrapper[4735]: I0131 15:14:07.779433 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=1.819616609 podStartE2EDuration="46.779406519s" podCreationTimestamp="2026-01-31 15:13:21 +0000 UTC" firstStartedPulling="2026-01-31 15:13:22.159101187 +0000 UTC m=+887.932430229" lastFinishedPulling="2026-01-31 15:14:07.118891097 +0000 UTC m=+932.892220139" observedRunningTime="2026-01-31 15:14:07.769617592 +0000 UTC m=+933.542946644" watchObservedRunningTime="2026-01-31 15:14:07.779406519 +0000 UTC m=+933.552735571" Jan 31 15:14:07 crc kubenswrapper[4735]: E0131 15:14:07.898782 4735 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:34596->38.102.83.241:38007: write tcp 38.102.83.241:34596->38.102.83.241:38007: write: broken pipe Jan 31 15:14:09 crc kubenswrapper[4735]: I0131 15:14:09.043337 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 15:14:09 crc kubenswrapper[4735]: I0131 15:14:09.043680 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 15:14:10 crc kubenswrapper[4735]: I0131 15:14:10.595053 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:10 crc kubenswrapper[4735]: E0131 15:14:10.595224 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:10 crc kubenswrapper[4735]: E0131 15:14:10.595373 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:10 crc kubenswrapper[4735]: E0131 15:14:10.595416 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:18.595403659 +0000 UTC m=+944.368732701 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:11 crc kubenswrapper[4735]: I0131 15:14:11.922176 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:12 crc kubenswrapper[4735]: I0131 15:14:12.588665 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:12 crc kubenswrapper[4735]: I0131 15:14:12.667759 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:12 crc kubenswrapper[4735]: I0131 15:14:12.761253 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="dnsmasq-dns" containerID="cri-o://8e03dcb5e85390c0ba7d840e691729eba9c81d5d05d7ff2b35f3f163711e4192" gracePeriod=10 Jan 31 15:14:12 crc kubenswrapper[4735]: I0131 15:14:12.797154 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 15:14:12 crc kubenswrapper[4735]: I0131 15:14:12.878731 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5616e745-0304-4987-bc98-aaa42fc5f6ea" containerName="galera" probeResult="failure" output=< Jan 31 15:14:12 crc kubenswrapper[4735]: wsrep_local_state_comment (Joined) differs from Synced Jan 31 15:14:12 crc kubenswrapper[4735]: > Jan 31 15:14:13 crc kubenswrapper[4735]: I0131 15:14:13.774846 4735 generic.go:334] "Generic (PLEG): container finished" podID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerID="8e03dcb5e85390c0ba7d840e691729eba9c81d5d05d7ff2b35f3f163711e4192" exitCode=0 Jan 31 15:14:13 crc kubenswrapper[4735]: I0131 15:14:13.774942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" event={"ID":"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8","Type":"ContainerDied","Data":"8e03dcb5e85390c0ba7d840e691729eba9c81d5d05d7ff2b35f3f163711e4192"} Jan 31 15:14:13 crc kubenswrapper[4735]: I0131 15:14:13.933354 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:13 crc kubenswrapper[4735]: I0131 15:14:13.933415 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:14 crc kubenswrapper[4735]: I0131 15:14:14.021769 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:14 crc kubenswrapper[4735]: I0131 15:14:14.849239 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:14 crc kubenswrapper[4735]: I0131 15:14:14.910875 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:15 crc kubenswrapper[4735]: I0131 15:14:15.312485 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 15:14:15 crc kubenswrapper[4735]: I0131 15:14:15.417559 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-galera-0" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.463874 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pm82k"] Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.466074 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.468963 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.474517 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pm82k"] Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.613658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwb4b\" (UniqueName: \"kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.613913 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.715349 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.715781 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwb4b\" (UniqueName: \"kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.716900 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.733882 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwb4b\" (UniqueName: \"kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b\") pod \"root-account-create-update-pm82k\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.784607 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:16 crc kubenswrapper[4735]: I0131 15:14:16.807053 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kgw44" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="registry-server" containerID="cri-o://9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b" gracePeriod=2 Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.016975 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.121967 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config\") pod \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.122019 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc\") pod \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.122054 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h64z\" (UniqueName: \"kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z\") pod \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\" (UID: \"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.126196 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z" (OuterVolumeSpecName: "kube-api-access-5h64z") pod "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" (UID: "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8"). InnerVolumeSpecName "kube-api-access-5h64z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.163119 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config" (OuterVolumeSpecName: "config") pod "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" (UID: "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.165032 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" (UID: "bdb13875-9a6e-4ee1-990c-7546ceb5ebd8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.224891 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.225530 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.225616 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h64z\" (UniqueName: \"kubernetes.io/projected/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8-kube-api-access-5h64z\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.261006 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.304816 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pm82k"] Jan 31 15:14:17 crc kubenswrapper[4735]: W0131 15:14:17.309478 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c2dce49_5dc7_4fa6_b0ef_9462616eb57c.slice/crio-b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec WatchSource:0}: Error finding container b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec: Status 404 returned error can't find the container with id b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.428514 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x75s\" (UniqueName: \"kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s\") pod \"618f0aea-b372-491a-a34c-25d808b450dd\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.428668 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content\") pod \"618f0aea-b372-491a-a34c-25d808b450dd\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.428734 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities\") pod \"618f0aea-b372-491a-a34c-25d808b450dd\" (UID: \"618f0aea-b372-491a-a34c-25d808b450dd\") " Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.432015 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities" (OuterVolumeSpecName: "utilities") pod "618f0aea-b372-491a-a34c-25d808b450dd" (UID: "618f0aea-b372-491a-a34c-25d808b450dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.435314 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s" (OuterVolumeSpecName: "kube-api-access-6x75s") pod "618f0aea-b372-491a-a34c-25d808b450dd" (UID: "618f0aea-b372-491a-a34c-25d808b450dd"). InnerVolumeSpecName "kube-api-access-6x75s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.452229 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "618f0aea-b372-491a-a34c-25d808b450dd" (UID: "618f0aea-b372-491a-a34c-25d808b450dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.531512 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.531556 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618f0aea-b372-491a-a34c-25d808b450dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.531572 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x75s\" (UniqueName: \"kubernetes.io/projected/618f0aea-b372-491a-a34c-25d808b450dd-kube-api-access-6x75s\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.820748 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6np6m" event={"ID":"83dc2b98-a7e1-4654-95cf-fd37532fa571","Type":"ContainerStarted","Data":"cab58b39179bf36bfd44d467a84637c109f7614ec1f170814dc017c2cb8085d1"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.823918 4735 generic.go:334] "Generic (PLEG): container finished" podID="8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" containerID="d0b6cf35090bc799300fab5aaa8952738e3d3bd78fe05a1369976c3a400378f8" exitCode=0 Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.824084 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82k" event={"ID":"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c","Type":"ContainerDied","Data":"d0b6cf35090bc799300fab5aaa8952738e3d3bd78fe05a1369976c3a400378f8"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.824161 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82k" event={"ID":"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c","Type":"ContainerStarted","Data":"b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.828175 4735 generic.go:334] "Generic (PLEG): container finished" podID="618f0aea-b372-491a-a34c-25d808b450dd" containerID="9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b" exitCode=0 Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.828267 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgw44" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.828280 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerDied","Data":"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.828718 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgw44" event={"ID":"618f0aea-b372-491a-a34c-25d808b450dd","Type":"ContainerDied","Data":"4db14b36107a2947e4f3a3c9b1a14344ce934cc6723a13fb06d40c2c654a037a"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.828744 4735 scope.go:117] "RemoveContainer" containerID="9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.839041 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" event={"ID":"bdb13875-9a6e-4ee1-990c-7546ceb5ebd8","Type":"ContainerDied","Data":"6e9e4e78f49d3769503768f5a122e77f94f3989212547446841edd0de49f0e0c"} Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.839137 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.859961 4735 scope.go:117] "RemoveContainer" containerID="dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.860855 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6np6m" podStartSLOduration=2.603992631 podStartE2EDuration="11.860834844s" podCreationTimestamp="2026-01-31 15:14:06 +0000 UTC" firstStartedPulling="2026-01-31 15:14:07.603421304 +0000 UTC m=+933.376750346" lastFinishedPulling="2026-01-31 15:14:16.860263497 +0000 UTC m=+942.633592559" observedRunningTime="2026-01-31 15:14:17.842786274 +0000 UTC m=+943.616115326" watchObservedRunningTime="2026-01-31 15:14:17.860834844 +0000 UTC m=+943.634163906" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.862949 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.869463 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgw44"] Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.883518 4735 scope.go:117] "RemoveContainer" containerID="b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.895627 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.905574 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mppk9"] Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.905969 4735 scope.go:117] "RemoveContainer" containerID="9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b" Jan 31 15:14:17 crc kubenswrapper[4735]: E0131 15:14:17.906390 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b\": container with ID starting with 
9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b not found: ID does not exist" containerID="9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.906437 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b"} err="failed to get container status \"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b\": rpc error: code = NotFound desc = could not find container \"9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b\": container with ID starting with 9eaa8b13639e0a0d3505c842027c8e34c5fafd872e6d00f281636cd1895b969b not found: ID does not exist" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.906458 4735 scope.go:117] "RemoveContainer" containerID="dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a" Jan 31 15:14:17 crc kubenswrapper[4735]: E0131 15:14:17.906818 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a\": container with ID starting with dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a not found: ID does not exist" containerID="dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.906837 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a"} err="failed to get container status \"dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a\": rpc error: code = NotFound desc = could not find container \"dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a\": container with ID starting with dc0104b76c747ee08529e98bdb1b66107485d49e667e869da424e2860b2f2b5a not found: ID does not exist" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.906850 4735 scope.go:117] "RemoveContainer" containerID="b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877" Jan 31 15:14:17 crc kubenswrapper[4735]: E0131 15:14:17.907025 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877\": container with ID starting with b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877 not found: ID does not exist" containerID="b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.907047 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877"} err="failed to get container status \"b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877\": rpc error: code = NotFound desc = could not find container \"b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877\": container with ID starting with b95d2617315e9cb0cf75a7bd8bb3c9566874bcaa38e4d15ab7ac4db3bb0ab877 not found: ID does not exist" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.907058 4735 scope.go:117] "RemoveContainer" containerID="8e03dcb5e85390c0ba7d840e691729eba9c81d5d05d7ff2b35f3f163711e4192" Jan 31 15:14:17 crc kubenswrapper[4735]: I0131 15:14:17.929130 4735 scope.go:117] "RemoveContainer" 
containerID="235fd3ed871321abbf3bbe6afaab4a14ab1560f2e9dab609d9dad825e08b0d6f" Jan 31 15:14:18 crc kubenswrapper[4735]: I0131 15:14:18.650627 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:18 crc kubenswrapper[4735]: E0131 15:14:18.650985 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:14:18 crc kubenswrapper[4735]: E0131 15:14:18.651035 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:14:18 crc kubenswrapper[4735]: E0131 15:14:18.651156 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift podName:5ca8c406-9ce6-427d-94ab-293bb0cb4c86 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:34.651114444 +0000 UTC m=+960.424443526 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift") pod "swift-storage-0" (UID: "5ca8c406-9ce6-427d-94ab-293bb0cb4c86") : configmap "swift-ring-files" not found Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126123 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g9wxj"] Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.126554 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="dnsmasq-dns" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126575 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="dnsmasq-dns" Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.126618 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="init" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126627 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="init" Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.126665 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="extract-utilities" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126674 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="extract-utilities" Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.126687 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="registry-server" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126695 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="registry-server" Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.126725 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="extract-content" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126733 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="618f0aea-b372-491a-a34c-25d808b450dd" 
containerName="extract-content" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126921 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="618f0aea-b372-491a-a34c-25d808b450dd" containerName="registry-server" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.126946 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="dnsmasq-dns" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.127627 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.149720 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g9wxj"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.173974 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.222402 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2bbb-account-create-update-tjlxh"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.223605 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.227486 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2bbb-account-create-update-tjlxh"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.227998 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.264543 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74928\" (UniqueName: \"kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.264642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.287047 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.365769 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts\") pod \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.365848 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwb4b\" (UniqueName: \"kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b\") pod \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\" (UID: \"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c\") " Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366220 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" (UID: "8c2dce49-5dc7-4fa6-b0ef-9462616eb57c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366201 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366378 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366740 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366850 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74928\" (UniqueName: \"kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366993 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.366994 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " 
pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.377599 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b" (OuterVolumeSpecName: "kube-api-access-rwb4b") pod "8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" (UID: "8c2dce49-5dc7-4fa6-b0ef-9462616eb57c"). InnerVolumeSpecName "kube-api-access-rwb4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.388028 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74928\" (UniqueName: \"kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928\") pod \"keystone-db-create-g9wxj\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.415165 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9jmx2"] Jan 31 15:14:19 crc kubenswrapper[4735]: E0131 15:14:19.424193 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" containerName="mariadb-account-create-update" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.424239 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" containerName="mariadb-account-create-update" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.424845 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" containerName="mariadb-account-create-update" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.425666 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.445133 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.463947 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9jmx2"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.468449 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.468564 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.468624 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwb4b\" (UniqueName: \"kubernetes.io/projected/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c-kube-api-access-rwb4b\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.469352 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.501865 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9\") pod \"keystone-2bbb-account-create-update-tjlxh\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.516015 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54d7-account-create-update-dqzl9"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.516963 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.518880 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.530587 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54d7-account-create-update-dqzl9"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.550686 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="618f0aea-b372-491a-a34c-25d808b450dd" path="/var/lib/kubelet/pods/618f0aea-b372-491a-a34c-25d808b450dd/volumes" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.551559 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" path="/var/lib/kubelet/pods/bdb13875-9a6e-4ee1-990c-7546ceb5ebd8/volumes" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.580524 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.580648 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9x2\" (UniqueName: \"kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.600461 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.685438 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.685612 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.685691 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k769t\" (UniqueName: \"kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.685769 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9x2\" (UniqueName: \"kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.687392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.709192 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9x2\" (UniqueName: \"kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2\") pod \"placement-db-create-9jmx2\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.732171 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vrsn9"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.733272 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.745093 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vrsn9"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.766110 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.787312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k769t\" (UniqueName: \"kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.787475 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.788144 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.802607 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k769t\" (UniqueName: \"kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t\") pod \"placement-54d7-account-create-update-dqzl9\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.837552 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7d1b-account-create-update-6rqlg"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.839370 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.841294 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.843259 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7d1b-account-create-update-6rqlg"] Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.868259 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82k" event={"ID":"8c2dce49-5dc7-4fa6-b0ef-9462616eb57c","Type":"ContainerDied","Data":"b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec"} Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.868299 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b511b648092c2972cf6479bddf17c7e118b0f85596f41618d5855551c9c16bec" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.868344 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pm82k" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.889565 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.889915 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6njv\" (UniqueName: \"kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.893200 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.931716 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g9wxj"] Jan 31 15:14:19 crc kubenswrapper[4735]: W0131 15:14:19.933118 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18645daa_dccb_485c_922e_847af9f4c6a0.slice/crio-294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad WatchSource:0}: Error finding container 294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad: Status 404 returned error can't find the container with id 294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.990884 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.990937 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6njv\" (UniqueName: \"kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.990967 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79qs\" (UniqueName: \"kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.991049 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:19 crc kubenswrapper[4735]: I0131 15:14:19.991642 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.009853 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6njv\" (UniqueName: \"kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv\") pod \"glance-db-create-vrsn9\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.059392 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.066578 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2bbb-account-create-update-tjlxh"] Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.093645 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.093762 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79qs\" (UniqueName: \"kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.094857 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.111019 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79qs\" (UniqueName: \"kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs\") pod \"glance-7d1b-account-create-update-6rqlg\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.163787 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.256400 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9jmx2"] Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.382572 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54d7-account-create-update-dqzl9"] Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.539239 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vrsn9"] Jan 31 15:14:20 crc kubenswrapper[4735]: W0131 15:14:20.565337 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49118420_7fc9_4bb6_8bb5_9d90dc2605f0.slice/crio-26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977 WatchSource:0}: Error finding container 26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977: Status 404 returned error can't find the container with id 26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.724175 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7d1b-account-create-update-6rqlg"] Jan 31 15:14:20 crc kubenswrapper[4735]: W0131 15:14:20.858682 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41448eb_e005_42d7_b16d_06a4d829a6b2.slice/crio-848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4 WatchSource:0}: Error finding container 848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4: Status 404 returned error can't find the container with id 848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.878079 4735 generic.go:334] "Generic (PLEG): container finished" podID="074ec696-8193-4cde-a5d1-a1b892a078ab" containerID="6dcc8b2f271b858f059b517d1a66c42ca2b9b6839a1eec104d32ff74a2471c42" exitCode=0 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.878479 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jmx2" event={"ID":"074ec696-8193-4cde-a5d1-a1b892a078ab","Type":"ContainerDied","Data":"6dcc8b2f271b858f059b517d1a66c42ca2b9b6839a1eec104d32ff74a2471c42"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.878507 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jmx2" event={"ID":"074ec696-8193-4cde-a5d1-a1b892a078ab","Type":"ContainerStarted","Data":"92b496fbb38e6658054344f04d2b7d9c65819d3e2fbe9d86799b798143b73701"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.880272 4735 generic.go:334] "Generic (PLEG): container finished" podID="4046fd4c-0329-4995-9691-0fee238a9907" containerID="23baca0f9f19850cf785bc01f2a02841efe9ab9dc350d661019cd8fe67c12a02" exitCode=0 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.880343 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54d7-account-create-update-dqzl9" event={"ID":"4046fd4c-0329-4995-9691-0fee238a9907","Type":"ContainerDied","Data":"23baca0f9f19850cf785bc01f2a02841efe9ab9dc350d661019cd8fe67c12a02"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.880366 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54d7-account-create-update-dqzl9" 
event={"ID":"4046fd4c-0329-4995-9691-0fee238a9907","Type":"ContainerStarted","Data":"e7c46780767c0c37795df6028069ae633e3e68b1cd22e1d647ae8754e28e2b2f"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.882200 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrsn9" event={"ID":"49118420-7fc9-4bb6-8bb5-9d90dc2605f0","Type":"ContainerStarted","Data":"20a37cfe493a4b03f6e7b1cf7846fb7dadfc89232b92782836aa4a60e9ecd922"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.882237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrsn9" event={"ID":"49118420-7fc9-4bb6-8bb5-9d90dc2605f0","Type":"ContainerStarted","Data":"26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.884385 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7d1b-account-create-update-6rqlg" event={"ID":"b41448eb-e005-42d7-b16d-06a4d829a6b2","Type":"ContainerStarted","Data":"848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.885792 4735 generic.go:334] "Generic (PLEG): container finished" podID="18645daa-dccb-485c-922e-847af9f4c6a0" containerID="771602732d55e74d2e20e2504813f92fa41341d8163e55aa8691445729447e28" exitCode=0 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.885845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9wxj" event={"ID":"18645daa-dccb-485c-922e-847af9f4c6a0","Type":"ContainerDied","Data":"771602732d55e74d2e20e2504813f92fa41341d8163e55aa8691445729447e28"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.885864 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9wxj" event={"ID":"18645daa-dccb-485c-922e-847af9f4c6a0","Type":"ContainerStarted","Data":"294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.887330 4735 generic.go:334] "Generic (PLEG): container finished" podID="4dc83c3a-8612-4615-9c72-64e73fd22e8a" containerID="1714d1843e4767f6d87e1ba2028f3fe764d64783562808b9a70159b25869d64d" exitCode=0 Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.887361 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2bbb-account-create-update-tjlxh" event={"ID":"4dc83c3a-8612-4615-9c72-64e73fd22e8a","Type":"ContainerDied","Data":"1714d1843e4767f6d87e1ba2028f3fe764d64783562808b9a70159b25869d64d"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.887378 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2bbb-account-create-update-tjlxh" event={"ID":"4dc83c3a-8612-4615-9c72-64e73fd22e8a","Type":"ContainerStarted","Data":"934cfb4bf89dedbbd124d82c722856941693304b8de55daab562a4f924c9e523"} Jan 31 15:14:20 crc kubenswrapper[4735]: I0131 15:14:20.948224 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vrsn9" podStartSLOduration=1.948204064 podStartE2EDuration="1.948204064s" podCreationTimestamp="2026-01-31 15:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:20.917119785 +0000 UTC m=+946.690448847" watchObservedRunningTime="2026-01-31 15:14:20.948204064 +0000 UTC m=+946.721533116" Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.470793 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.904221 4735 generic.go:334] "Generic (PLEG): container finished" podID="b41448eb-e005-42d7-b16d-06a4d829a6b2" containerID="c1d8a66fc16f4e29f1ce645b96d4c8da320ba74f27459422770ace60ffb9e2d0" exitCode=0 Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.904305 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7d1b-account-create-update-6rqlg" event={"ID":"b41448eb-e005-42d7-b16d-06a4d829a6b2","Type":"ContainerDied","Data":"c1d8a66fc16f4e29f1ce645b96d4c8da320ba74f27459422770ace60ffb9e2d0"} Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.911159 4735 generic.go:334] "Generic (PLEG): container finished" podID="49118420-7fc9-4bb6-8bb5-9d90dc2605f0" containerID="20a37cfe493a4b03f6e7b1cf7846fb7dadfc89232b92782836aa4a60e9ecd922" exitCode=0 Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.911362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrsn9" event={"ID":"49118420-7fc9-4bb6-8bb5-9d90dc2605f0","Type":"ContainerDied","Data":"20a37cfe493a4b03f6e7b1cf7846fb7dadfc89232b92782836aa4a60e9ecd922"} Jan 31 15:14:21 crc kubenswrapper[4735]: I0131 15:14:21.927767 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-mppk9" podUID="bdb13875-9a6e-4ee1-990c-7546ceb5ebd8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.286843 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.440120 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts\") pod \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.440223 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9\") pod \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\" (UID: \"4dc83c3a-8612-4615-9c72-64e73fd22e8a\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.442719 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dc83c3a-8612-4615-9c72-64e73fd22e8a" (UID: "4dc83c3a-8612-4615-9c72-64e73fd22e8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.469150 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.479129 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.484119 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.542309 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts\") pod \"18645daa-dccb-485c-922e-847af9f4c6a0\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.542348 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k769t\" (UniqueName: \"kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t\") pod \"4046fd4c-0329-4995-9691-0fee238a9907\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.542469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts\") pod \"4046fd4c-0329-4995-9691-0fee238a9907\" (UID: \"4046fd4c-0329-4995-9691-0fee238a9907\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.542568 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74928\" (UniqueName: \"kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928\") pod \"18645daa-dccb-485c-922e-847af9f4c6a0\" (UID: \"18645daa-dccb-485c-922e-847af9f4c6a0\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.542910 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dc83c3a-8612-4615-9c72-64e73fd22e8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.553608 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18645daa-dccb-485c-922e-847af9f4c6a0" (UID: "18645daa-dccb-485c-922e-847af9f4c6a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.554821 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4046fd4c-0329-4995-9691-0fee238a9907" (UID: "4046fd4c-0329-4995-9691-0fee238a9907"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.610714 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.643539 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts\") pod \"074ec696-8193-4cde-a5d1-a1b892a078ab\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.643601 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw9x2\" (UniqueName: \"kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2\") pod \"074ec696-8193-4cde-a5d1-a1b892a078ab\" (UID: \"074ec696-8193-4cde-a5d1-a1b892a078ab\") " Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.644094 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18645daa-dccb-485c-922e-847af9f4c6a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.644114 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4046fd4c-0329-4995-9691-0fee238a9907-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.644965 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "074ec696-8193-4cde-a5d1-a1b892a078ab" (UID: "074ec696-8193-4cde-a5d1-a1b892a078ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.745252 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074ec696-8193-4cde-a5d1-a1b892a078ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.918037 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g9wxj" event={"ID":"18645daa-dccb-485c-922e-847af9f4c6a0","Type":"ContainerDied","Data":"294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad"} Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.918073 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294c6815af3e258ae3b55f1a69184bf1c9068589f8fae8f54c95b94ed32106ad" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.918126 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g9wxj" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.919355 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2bbb-account-create-update-tjlxh" event={"ID":"4dc83c3a-8612-4615-9c72-64e73fd22e8a","Type":"ContainerDied","Data":"934cfb4bf89dedbbd124d82c722856941693304b8de55daab562a4f924c9e523"} Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.919377 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934cfb4bf89dedbbd124d82c722856941693304b8de55daab562a4f924c9e523" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.919409 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2bbb-account-create-update-tjlxh" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.920443 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9jmx2" event={"ID":"074ec696-8193-4cde-a5d1-a1b892a078ab","Type":"ContainerDied","Data":"92b496fbb38e6658054344f04d2b7d9c65819d3e2fbe9d86799b798143b73701"} Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.920465 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92b496fbb38e6658054344f04d2b7d9c65819d3e2fbe9d86799b798143b73701" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.920498 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9jmx2" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.921650 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54d7-account-create-update-dqzl9" Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.921761 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54d7-account-create-update-dqzl9" event={"ID":"4046fd4c-0329-4995-9691-0fee238a9907","Type":"ContainerDied","Data":"e7c46780767c0c37795df6028069ae633e3e68b1cd22e1d647ae8754e28e2b2f"} Jan 31 15:14:22 crc kubenswrapper[4735]: I0131 15:14:22.921780 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c46780767c0c37795df6028069ae633e3e68b1cd22e1d647ae8754e28e2b2f" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.372514 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t" (OuterVolumeSpecName: "kube-api-access-k769t") pod "4046fd4c-0329-4995-9691-0fee238a9907" (UID: "4046fd4c-0329-4995-9691-0fee238a9907"). InnerVolumeSpecName "kube-api-access-k769t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.372697 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2" (OuterVolumeSpecName: "kube-api-access-hw9x2") pod "074ec696-8193-4cde-a5d1-a1b892a078ab" (UID: "074ec696-8193-4cde-a5d1-a1b892a078ab"). InnerVolumeSpecName "kube-api-access-hw9x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.373363 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9" (OuterVolumeSpecName: "kube-api-access-vtlz9") pod "4dc83c3a-8612-4615-9c72-64e73fd22e8a" (UID: "4dc83c3a-8612-4615-9c72-64e73fd22e8a"). InnerVolumeSpecName "kube-api-access-vtlz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.391656 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928" (OuterVolumeSpecName: "kube-api-access-74928") pod "18645daa-dccb-485c-922e-847af9f4c6a0" (UID: "18645daa-dccb-485c-922e-847af9f4c6a0"). InnerVolumeSpecName "kube-api-access-74928". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.469491 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k769t\" (UniqueName: \"kubernetes.io/projected/4046fd4c-0329-4995-9691-0fee238a9907-kube-api-access-k769t\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.469532 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw9x2\" (UniqueName: \"kubernetes.io/projected/074ec696-8193-4cde-a5d1-a1b892a078ab-kube-api-access-hw9x2\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.469551 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtlz9\" (UniqueName: \"kubernetes.io/projected/4dc83c3a-8612-4615-9c72-64e73fd22e8a-kube-api-access-vtlz9\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.469569 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74928\" (UniqueName: \"kubernetes.io/projected/18645daa-dccb-485c-922e-847af9f4c6a0-kube-api-access-74928\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.772412 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.779722 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.877691 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts\") pod \"b41448eb-e005-42d7-b16d-06a4d829a6b2\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.877847 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79qs\" (UniqueName: \"kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs\") pod \"b41448eb-e005-42d7-b16d-06a4d829a6b2\" (UID: \"b41448eb-e005-42d7-b16d-06a4d829a6b2\") " Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.878449 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b41448eb-e005-42d7-b16d-06a4d829a6b2" (UID: "b41448eb-e005-42d7-b16d-06a4d829a6b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.889715 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs" (OuterVolumeSpecName: "kube-api-access-h79qs") pod "b41448eb-e005-42d7-b16d-06a4d829a6b2" (UID: "b41448eb-e005-42d7-b16d-06a4d829a6b2"). InnerVolumeSpecName "kube-api-access-h79qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.956522 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vrsn9" event={"ID":"49118420-7fc9-4bb6-8bb5-9d90dc2605f0","Type":"ContainerDied","Data":"26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977"} Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.956567 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26444f298726e9cdff6d90622eeac4095a3d57f77f431981d8c91295568f1977" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.957750 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vrsn9" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.962737 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7d1b-account-create-update-6rqlg" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.963284 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7d1b-account-create-update-6rqlg" event={"ID":"b41448eb-e005-42d7-b16d-06a4d829a6b2","Type":"ContainerDied","Data":"848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4"} Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.963418 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848b310cd82afc15afac9c48067ef5f8c6b17f043efff96db4ca38be50c900f4" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.979704 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6njv\" (UniqueName: \"kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv\") pod \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.979870 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts\") pod \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\" (UID: \"49118420-7fc9-4bb6-8bb5-9d90dc2605f0\") " Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.980246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49118420-7fc9-4bb6-8bb5-9d90dc2605f0" (UID: "49118420-7fc9-4bb6-8bb5-9d90dc2605f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.980279 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b41448eb-e005-42d7-b16d-06a4d829a6b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.980297 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79qs\" (UniqueName: \"kubernetes.io/projected/b41448eb-e005-42d7-b16d-06a4d829a6b2-kube-api-access-h79qs\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:23 crc kubenswrapper[4735]: I0131 15:14:23.982619 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv" (OuterVolumeSpecName: "kube-api-access-d6njv") pod "49118420-7fc9-4bb6-8bb5-9d90dc2605f0" (UID: "49118420-7fc9-4bb6-8bb5-9d90dc2605f0"). InnerVolumeSpecName "kube-api-access-d6njv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:24 crc kubenswrapper[4735]: I0131 15:14:24.081996 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6njv\" (UniqueName: \"kubernetes.io/projected/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-kube-api-access-d6njv\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:24 crc kubenswrapper[4735]: I0131 15:14:24.082031 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49118420-7fc9-4bb6-8bb5-9d90dc2605f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:24 crc kubenswrapper[4735]: I0131 15:14:24.973332 4735 generic.go:334] "Generic (PLEG): container finished" podID="83dc2b98-a7e1-4654-95cf-fd37532fa571" containerID="cab58b39179bf36bfd44d467a84637c109f7614ec1f170814dc017c2cb8085d1" exitCode=0 Jan 31 15:14:24 crc kubenswrapper[4735]: I0131 15:14:24.973453 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6np6m" event={"ID":"83dc2b98-a7e1-4654-95cf-fd37532fa571","Type":"ContainerDied","Data":"cab58b39179bf36bfd44d467a84637c109f7614ec1f170814dc017c2cb8085d1"} Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.080633 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6l488"] Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.080976 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18645daa-dccb-485c-922e-847af9f4c6a0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.080993 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="18645daa-dccb-485c-922e-847af9f4c6a0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.081013 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc83c3a-8612-4615-9c72-64e73fd22e8a" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081020 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc83c3a-8612-4615-9c72-64e73fd22e8a" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.081049 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49118420-7fc9-4bb6-8bb5-9d90dc2605f0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081057 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="49118420-7fc9-4bb6-8bb5-9d90dc2605f0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.081069 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41448eb-e005-42d7-b16d-06a4d829a6b2" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081076 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41448eb-e005-42d7-b16d-06a4d829a6b2" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.081090 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046fd4c-0329-4995-9691-0fee238a9907" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081098 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046fd4c-0329-4995-9691-0fee238a9907" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: E0131 15:14:25.081113 4735 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="074ec696-8193-4cde-a5d1-a1b892a078ab" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081121 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ec696-8193-4cde-a5d1-a1b892a078ab" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081296 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ec696-8193-4cde-a5d1-a1b892a078ab" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081319 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4046fd4c-0329-4995-9691-0fee238a9907" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081336 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41448eb-e005-42d7-b16d-06a4d829a6b2" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081345 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc83c3a-8612-4615-9c72-64e73fd22e8a" containerName="mariadb-account-create-update" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081354 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="18645daa-dccb-485c-922e-847af9f4c6a0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081363 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="49118420-7fc9-4bb6-8bb5-9d90dc2605f0" containerName="mariadb-database-create" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.081989 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.084868 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.085178 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qwr7x" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.098673 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.098723 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.098803 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.098848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9x7\" (UniqueName: 
\"kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.106289 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6l488"] Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.200442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.200986 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.201057 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.201092 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9x7\" (UniqueName: \"kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.206987 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.208079 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.215921 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.218175 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9x7\" (UniqueName: \"kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7\") pod \"glance-db-sync-6l488\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.404989 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6l488" Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.979093 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6l488"] Jan 31 15:14:25 crc kubenswrapper[4735]: I0131 15:14:25.982859 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6l488" event={"ID":"03d29a2f-1ba4-48e8-8c33-c1a96440ae36","Type":"ContainerStarted","Data":"c869bc5731ae5c35f2d97b3d05817c3bb28aa1cfc2f93a61eb34d5c34f591288"} Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.314809 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444309 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444476 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444583 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444634 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444701 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6c95\" (UniqueName: \"kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444787 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.444837 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices\") pod \"83dc2b98-a7e1-4654-95cf-fd37532fa571\" (UID: \"83dc2b98-a7e1-4654-95cf-fd37532fa571\") " Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.445546 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.446556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.454337 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95" (OuterVolumeSpecName: "kube-api-access-p6c95") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "kube-api-access-p6c95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.455351 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.481361 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts" (OuterVolumeSpecName: "scripts") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.483663 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.486752 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "83dc2b98-a7e1-4654-95cf-fd37532fa571" (UID: "83dc2b98-a7e1-4654-95cf-fd37532fa571"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.546877 4735 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83dc2b98-a7e1-4654-95cf-fd37532fa571-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547098 4735 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547116 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6c95\" (UniqueName: \"kubernetes.io/projected/83dc2b98-a7e1-4654-95cf-fd37532fa571-kube-api-access-p6c95\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547131 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547143 4735 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547155 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83dc2b98-a7e1-4654-95cf-fd37532fa571-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.547164 4735 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83dc2b98-a7e1-4654-95cf-fd37532fa571-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.991675 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6np6m" event={"ID":"83dc2b98-a7e1-4654-95cf-fd37532fa571","Type":"ContainerDied","Data":"0e61f45d0ef59c90cc8103e2726f908240d1cc9a86f7ab3fbc3f29e1ed7e5a3f"} Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.991710 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e61f45d0ef59c90cc8103e2726f908240d1cc9a86f7ab3fbc3f29e1ed7e5a3f" Jan 31 15:14:26 crc kubenswrapper[4735]: I0131 15:14:26.991759 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6np6m" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.691278 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pm82k"] Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.699310 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pm82k"] Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.705714 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4vc7k"] Jan 31 15:14:27 crc kubenswrapper[4735]: E0131 15:14:27.706099 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dc2b98-a7e1-4654-95cf-fd37532fa571" containerName="swift-ring-rebalance" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.706114 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dc2b98-a7e1-4654-95cf-fd37532fa571" containerName="swift-ring-rebalance" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.706291 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dc2b98-a7e1-4654-95cf-fd37532fa571" containerName="swift-ring-rebalance" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.706905 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.711626 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vc7k"] Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.711768 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.790335 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.790763 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mcsr\" (UniqueName: \"kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.891888 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mcsr\" (UniqueName: \"kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.892002 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.892764 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:27 crc kubenswrapper[4735]: I0131 15:14:27.907787 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mcsr\" (UniqueName: \"kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr\") pod \"root-account-create-update-4vc7k\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:28 crc kubenswrapper[4735]: I0131 15:14:28.031056 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:28 crc kubenswrapper[4735]: I0131 15:14:28.470603 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vc7k"] Jan 31 15:14:28 crc kubenswrapper[4735]: W0131 15:14:28.476914 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328eb3c0_bc56_40b1_88f4_e84da75dcffa.slice/crio-d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318 WatchSource:0}: Error finding container d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318: Status 404 returned error can't find the container with id d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318 Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.010816 4735 generic.go:334] "Generic (PLEG): container finished" podID="328eb3c0-bc56-40b1-88f4-e84da75dcffa" containerID="67e287aeb9cc826d9213d44f56a2284064f60b580e3d1baf5808322cda6bcf3c" exitCode=0 Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.010948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vc7k" event={"ID":"328eb3c0-bc56-40b1-88f4-e84da75dcffa","Type":"ContainerDied","Data":"67e287aeb9cc826d9213d44f56a2284064f60b580e3d1baf5808322cda6bcf3c"} Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.012759 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vc7k" event={"ID":"328eb3c0-bc56-40b1-88f4-e84da75dcffa","Type":"ContainerStarted","Data":"d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318"} Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.528992 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2vhbk" podUID="07524504-28f6-44cc-8630-2e736f87ff3d" containerName="ovn-controller" probeResult="failure" output=< Jan 31 15:14:29 crc kubenswrapper[4735]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 15:14:29 crc kubenswrapper[4735]: > Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.551274 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2dce49-5dc7-4fa6-b0ef-9462616eb57c" path="/var/lib/kubelet/pods/8c2dce49-5dc7-4fa6-b0ef-9462616eb57c/volumes" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.552009 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.552052 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v8dt8" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.756541 4735 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ovn-controller-2vhbk-config-6b5vh"] Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.757843 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.764464 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2vhbk-config-6b5vh"] Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.764847 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.925986 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.926117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g457l\" (UniqueName: \"kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.926264 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.926327 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.926506 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:29 crc kubenswrapper[4735]: I0131 15:14:29.926571 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.025302 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerID="70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0" exitCode=0 Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.025395 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerDied","Data":"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0"} Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.028954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.028996 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.029095 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.029130 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g457l\" (UniqueName: \"kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.029187 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.029213 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.029655 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.031874 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.032034 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts\") pod 
\"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.032100 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.032793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.105718 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g457l\" (UniqueName: \"kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l\") pod \"ovn-controller-2vhbk-config-6b5vh\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.385612 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.486390 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.609359 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2vhbk-config-6b5vh"] Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.660851 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts\") pod \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.660958 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mcsr\" (UniqueName: \"kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr\") pod \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\" (UID: \"328eb3c0-bc56-40b1-88f4-e84da75dcffa\") " Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.661651 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "328eb3c0-bc56-40b1-88f4-e84da75dcffa" (UID: "328eb3c0-bc56-40b1-88f4-e84da75dcffa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.666545 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr" (OuterVolumeSpecName: "kube-api-access-5mcsr") pod "328eb3c0-bc56-40b1-88f4-e84da75dcffa" (UID: "328eb3c0-bc56-40b1-88f4-e84da75dcffa"). InnerVolumeSpecName "kube-api-access-5mcsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.763580 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/328eb3c0-bc56-40b1-88f4-e84da75dcffa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:30 crc kubenswrapper[4735]: I0131 15:14:30.763645 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mcsr\" (UniqueName: \"kubernetes.io/projected/328eb3c0-bc56-40b1-88f4-e84da75dcffa-kube-api-access-5mcsr\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.035082 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk-config-6b5vh" event={"ID":"a42dc430-450f-449d-a012-40a07faa5765","Type":"ContainerStarted","Data":"718bcd90643fb57832dc634a3474c2a388fad427619adc8da4a29634472564b2"} Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.035401 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk-config-6b5vh" event={"ID":"a42dc430-450f-449d-a012-40a07faa5765","Type":"ContainerStarted","Data":"01014b2160ec15279a1a1fef497bfacd81529b9cb331bb05ab0548a1e042a962"} Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.037962 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vc7k" event={"ID":"328eb3c0-bc56-40b1-88f4-e84da75dcffa","Type":"ContainerDied","Data":"d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318"} Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.037990 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06b719ac069372975a6dacff78d30dcc6aa88ce8b89979b093a90abada83318" Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.038052 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4vc7k" Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.054459 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerStarted","Data":"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9"} Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.054703 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.055562 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2vhbk-config-6b5vh" podStartSLOduration=2.055531719 podStartE2EDuration="2.055531719s" podCreationTimestamp="2026-01-31 15:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:31.051141125 +0000 UTC m=+956.824470167" watchObservedRunningTime="2026-01-31 15:14:31.055531719 +0000 UTC m=+956.828860761" Jan 31 15:14:31 crc kubenswrapper[4735]: I0131 15:14:31.081920 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.227138515 podStartE2EDuration="1m17.081899644s" podCreationTimestamp="2026-01-31 15:13:14 +0000 UTC" firstStartedPulling="2026-01-31 15:13:16.993444385 +0000 UTC m=+882.766773427" lastFinishedPulling="2026-01-31 15:13:55.848205514 +0000 UTC m=+921.621534556" observedRunningTime="2026-01-31 15:14:31.081451162 +0000 UTC m=+956.854780224" watchObservedRunningTime="2026-01-31 15:14:31.081899644 +0000 UTC m=+956.855228686" Jan 31 15:14:32 crc kubenswrapper[4735]: I0131 15:14:32.076041 4735 generic.go:334] "Generic (PLEG): container finished" podID="a42dc430-450f-449d-a012-40a07faa5765" containerID="718bcd90643fb57832dc634a3474c2a388fad427619adc8da4a29634472564b2" exitCode=0 Jan 31 15:14:32 crc kubenswrapper[4735]: I0131 15:14:32.076255 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk-config-6b5vh" event={"ID":"a42dc430-450f-449d-a012-40a07faa5765","Type":"ContainerDied","Data":"718bcd90643fb57832dc634a3474c2a388fad427619adc8da4a29634472564b2"} Jan 31 15:14:34 crc kubenswrapper[4735]: I0131 15:14:34.525110 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2vhbk" Jan 31 15:14:34 crc kubenswrapper[4735]: I0131 15:14:34.729271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:34 crc kubenswrapper[4735]: I0131 15:14:34.741246 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ca8c406-9ce6-427d-94ab-293bb0cb4c86-etc-swift\") pod \"swift-storage-0\" (UID: \"5ca8c406-9ce6-427d-94ab-293bb0cb4c86\") " pod="openstack/swift-storage-0" Jan 31 15:14:34 crc kubenswrapper[4735]: I0131 15:14:34.902096 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 15:14:35 crc kubenswrapper[4735]: I0131 15:14:35.109970 4735 generic.go:334] "Generic (PLEG): container finished" podID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerID="2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6" exitCode=0 Jan 31 15:14:35 crc kubenswrapper[4735]: I0131 15:14:35.110036 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerDied","Data":"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6"} Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.346174 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.346728 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.346765 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.347258 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.347300 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264" gracePeriod=600 Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.858849 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.989674 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990021 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990077 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990188 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990279 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990336 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g457l\" (UniqueName: \"kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l\") pod \"a42dc430-450f-449d-a012-40a07faa5765\" (UID: \"a42dc430-450f-449d-a012-40a07faa5765\") " Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990863 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990872 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.990924 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run" (OuterVolumeSpecName: "var-run") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.991385 4735 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.991411 4735 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.991448 4735 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a42dc430-450f-449d-a012-40a07faa5765-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.991524 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.991852 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts" (OuterVolumeSpecName: "scripts") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:37 crc kubenswrapper[4735]: I0131 15:14:37.996578 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l" (OuterVolumeSpecName: "kube-api-access-g457l") pod "a42dc430-450f-449d-a012-40a07faa5765" (UID: "a42dc430-450f-449d-a012-40a07faa5765"). InnerVolumeSpecName "kube-api-access-g457l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.092963 4735 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.092996 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g457l\" (UniqueName: \"kubernetes.io/projected/a42dc430-450f-449d-a012-40a07faa5765-kube-api-access-g457l\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.093008 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a42dc430-450f-449d-a012-40a07faa5765-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.139978 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2vhbk-config-6b5vh" event={"ID":"a42dc430-450f-449d-a012-40a07faa5765","Type":"ContainerDied","Data":"01014b2160ec15279a1a1fef497bfacd81529b9cb331bb05ab0548a1e042a962"} Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.140017 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01014b2160ec15279a1a1fef497bfacd81529b9cb331bb05ab0548a1e042a962" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.140073 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2vhbk-config-6b5vh" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.151764 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerStarted","Data":"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45"} Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.152468 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.164568 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264" exitCode=0 Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.164625 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264"} Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.164658 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c"} Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.164682 4735 scope.go:117] "RemoveContainer" containerID="4468c509f78001cecce931b3b895045b97daddc0962f5716ac023e1697d1d638" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.186797 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371953.667997 podStartE2EDuration="1m23.18677827s" podCreationTimestamp="2026-01-31 15:13:15 +0000 UTC" firstStartedPulling="2026-01-31 15:13:17.147922961 +0000 UTC 
m=+882.921252003" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:38.185783062 +0000 UTC m=+963.959112114" watchObservedRunningTime="2026-01-31 15:14:38.18677827 +0000 UTC m=+963.960107312" Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.457039 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 15:14:38 crc kubenswrapper[4735]: W0131 15:14:38.465825 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ca8c406_9ce6_427d_94ab_293bb0cb4c86.slice/crio-70ca368e983310cbadd5bd40b28d543c55777ef5a00afc1a3fe41f9d4c809337 WatchSource:0}: Error finding container 70ca368e983310cbadd5bd40b28d543c55777ef5a00afc1a3fe41f9d4c809337: Status 404 returned error can't find the container with id 70ca368e983310cbadd5bd40b28d543c55777ef5a00afc1a3fe41f9d4c809337 Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.975173 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2vhbk-config-6b5vh"] Jan 31 15:14:38 crc kubenswrapper[4735]: I0131 15:14:38.986597 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2vhbk-config-6b5vh"] Jan 31 15:14:39 crc kubenswrapper[4735]: I0131 15:14:39.188406 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"70ca368e983310cbadd5bd40b28d543c55777ef5a00afc1a3fe41f9d4c809337"} Jan 31 15:14:39 crc kubenswrapper[4735]: I0131 15:14:39.191133 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6l488" event={"ID":"03d29a2f-1ba4-48e8-8c33-c1a96440ae36","Type":"ContainerStarted","Data":"ec2c1b870356a5e4021599447fd42bf189a0b19e7629f839cfc7182e4535e1c4"} Jan 31 15:14:39 crc kubenswrapper[4735]: I0131 15:14:39.210166 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6l488" podStartSLOduration=2.1899689589999998 podStartE2EDuration="14.210148911s" podCreationTimestamp="2026-01-31 15:14:25 +0000 UTC" firstStartedPulling="2026-01-31 15:14:25.97707773 +0000 UTC m=+951.750406772" lastFinishedPulling="2026-01-31 15:14:37.997257662 +0000 UTC m=+963.770586724" observedRunningTime="2026-01-31 15:14:39.206452927 +0000 UTC m=+964.979781979" watchObservedRunningTime="2026-01-31 15:14:39.210148911 +0000 UTC m=+964.983477953" Jan 31 15:14:39 crc kubenswrapper[4735]: I0131 15:14:39.553654 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42dc430-450f-449d-a012-40a07faa5765" path="/var/lib/kubelet/pods/a42dc430-450f-449d-a012-40a07faa5765/volumes" Jan 31 15:14:40 crc kubenswrapper[4735]: I0131 15:14:40.200677 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"218df358cc6292caa768a0427d9ddb0708732f6340c500abbe3fe851b2c1c31a"} Jan 31 15:14:40 crc kubenswrapper[4735]: I0131 15:14:40.201454 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"2a06e0b3c554049fa02cb831b6dd265ee3d5d3f29e796bc2689f2c3a2e9e1898"} Jan 31 15:14:41 crc kubenswrapper[4735]: I0131 15:14:41.212868 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"528544794c22663790684077c48bab1b1d28ee6f66cff0cd5ce8e25fb43d0e88"} Jan 31 15:14:41 crc kubenswrapper[4735]: I0131 15:14:41.213223 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"fb05347aa5c7ee5343a16f471f43951bb035a632d36dc04e97bcdc700d5fcd34"} Jan 31 15:14:42 crc kubenswrapper[4735]: I0131 15:14:42.224583 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"95062da8c4dd6f325198e9242696002fc2ba15b15872b22ccaff0f09c144f0fb"} Jan 31 15:14:42 crc kubenswrapper[4735]: I0131 15:14:42.224965 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"e203326abf86216eede10b92c946c5a27859694f3182fa3f650554cdafc02373"} Jan 31 15:14:42 crc kubenswrapper[4735]: I0131 15:14:42.224981 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"59a62d86bdcb6d4d8cd8cd9d1ca270b3b609cff7225ef35af294e0a9083705c6"} Jan 31 15:14:43 crc kubenswrapper[4735]: I0131 15:14:43.239747 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"21cbfcc80fd5799cedd7f04517d98cfefa7d2f1cf18ef9e2d02ef73e9b0f19f2"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256572 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"f4a806e2ec1e2fc232c24fa0834d66c5fb2ae1919315a7d1c06e12fdb245045b"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256941 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"bfda0afc47e126470ec82356a23ce6c3d147cf9bd9bde3d90b1eeb486fcf798f"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256955 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"62d50d1be1151ea5cb525ee38eb2f7fd9e47a8e79c4ecb86330d3dd925933346"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256966 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"683ebc3336f588e4c9e62fe5a931f6b95ae6380250ba5b1b84120e4f1bcc0e23"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256977 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"750aff334937216a440e4b5702976901b08b92554ca4c28382fec62a5eb39e9b"} Jan 31 15:14:44 crc kubenswrapper[4735]: I0131 15:14:44.256989 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"3e098899dbc23c2cc50f89e86eaa1ce5426e1157f0296d7f1de0f1e120ca97d6"} Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.304033 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5ca8c406-9ce6-427d-94ab-293bb0cb4c86","Type":"ContainerStarted","Data":"75b23ebd5083d9c97b96c907980a2bf18e4396d23e1e7af9ad31018815f56624"} Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.313702 4735 generic.go:334] "Generic (PLEG): container finished" podID="03d29a2f-1ba4-48e8-8c33-c1a96440ae36" containerID="ec2c1b870356a5e4021599447fd42bf189a0b19e7629f839cfc7182e4535e1c4" exitCode=0 Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.313769 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6l488" event={"ID":"03d29a2f-1ba4-48e8-8c33-c1a96440ae36","Type":"ContainerDied","Data":"ec2c1b870356a5e4021599447fd42bf189a0b19e7629f839cfc7182e4535e1c4"} Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.371756 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.709471137 podStartE2EDuration="44.371729638s" podCreationTimestamp="2026-01-31 15:14:01 +0000 UTC" firstStartedPulling="2026-01-31 15:14:38.468008431 +0000 UTC m=+964.241337463" lastFinishedPulling="2026-01-31 15:14:43.130266932 +0000 UTC m=+968.903595964" observedRunningTime="2026-01-31 15:14:45.357134505 +0000 UTC m=+971.130463647" watchObservedRunningTime="2026-01-31 15:14:45.371729638 +0000 UTC m=+971.145058720" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.712122 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:45 crc kubenswrapper[4735]: E0131 15:14:45.712887 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42dc430-450f-449d-a012-40a07faa5765" containerName="ovn-config" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.713043 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42dc430-450f-449d-a012-40a07faa5765" containerName="ovn-config" Jan 31 15:14:45 crc kubenswrapper[4735]: E0131 15:14:45.713163 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328eb3c0-bc56-40b1-88f4-e84da75dcffa" containerName="mariadb-account-create-update" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.713273 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="328eb3c0-bc56-40b1-88f4-e84da75dcffa" containerName="mariadb-account-create-update" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.713678 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42dc430-450f-449d-a012-40a07faa5765" containerName="ovn-config" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.713819 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="328eb3c0-bc56-40b1-88f4-e84da75dcffa" containerName="mariadb-account-create-update" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.715121 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.717280 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.736673 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.815665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.815731 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52rs\" (UniqueName: \"kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.815943 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.816142 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.816272 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.816318 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: 
\"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918377 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918456 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918497 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w52rs\" (UniqueName: \"kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.918557 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.919742 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.921561 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.921601 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.921828 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: I0131 15:14:45.922581 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:45 crc kubenswrapper[4735]: 
I0131 15:14:45.949920 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52rs\" (UniqueName: \"kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs\") pod \"dnsmasq-dns-764c5664d7-qvkkz\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.033996 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.429004 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.484050 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.731948 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6l488" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.757461 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jq6db"] Jan 31 15:14:46 crc kubenswrapper[4735]: E0131 15:14:46.757880 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d29a2f-1ba4-48e8-8c33-c1a96440ae36" containerName="glance-db-sync" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.757904 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d29a2f-1ba4-48e8-8c33-c1a96440ae36" containerName="glance-db-sync" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.758102 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d29a2f-1ba4-48e8-8c33-c1a96440ae36" containerName="glance-db-sync" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.758775 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.771885 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jq6db"] Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.830722 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data\") pod \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.830968 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle\") pod \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.831067 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk9x7\" (UniqueName: \"kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7\") pod \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.831150 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data\") pod \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\" (UID: \"03d29a2f-1ba4-48e8-8c33-c1a96440ae36\") " Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.831361 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktk6\" (UniqueName: \"kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.831401 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.839618 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03d29a2f-1ba4-48e8-8c33-c1a96440ae36" (UID: "03d29a2f-1ba4-48e8-8c33-c1a96440ae36"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.839656 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7" (OuterVolumeSpecName: "kube-api-access-gk9x7") pod "03d29a2f-1ba4-48e8-8c33-c1a96440ae36" (UID: "03d29a2f-1ba4-48e8-8c33-c1a96440ae36"). InnerVolumeSpecName "kube-api-access-gk9x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.882308 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b354-account-create-update-dsnth"] Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.882528 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03d29a2f-1ba4-48e8-8c33-c1a96440ae36" (UID: "03d29a2f-1ba4-48e8-8c33-c1a96440ae36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.883367 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.885244 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.899361 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b354-account-create-update-dsnth"] Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.903634 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data" (OuterVolumeSpecName: "config-data") pod "03d29a2f-1ba4-48e8-8c33-c1a96440ae36" (UID: "03d29a2f-1ba4-48e8-8c33-c1a96440ae36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934448 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktk6\" (UniqueName: \"kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934533 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8vb4\" (UniqueName: \"kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934571 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934640 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk9x7\" (UniqueName: \"kubernetes.io/projected/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-kube-api-access-gk9x7\") on node \"crc\" 
DevicePath \"\"" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934651 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934662 4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.934670 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d29a2f-1ba4-48e8-8c33-c1a96440ae36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.935977 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.959161 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d2zgp"] Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.960362 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.969488 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktk6\" (UniqueName: \"kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6\") pod \"cinder-db-create-jq6db\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:46 crc kubenswrapper[4735]: I0131 15:14:46.980716 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d2zgp"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.035606 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.035695 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.035727 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8vb4\" (UniqueName: \"kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.035802 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhcb\" (UniqueName: 
\"kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.036545 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.041744 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vrmlh"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.043727 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.054588 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9777-account-create-update-wjtv8"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.055753 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.060677 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.064598 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8vb4\" (UniqueName: \"kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4\") pod \"cinder-b354-account-create-update-dsnth\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.068272 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9777-account-create-update-wjtv8"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.086461 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vrmlh"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.110020 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137119 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk79h\" (UniqueName: \"kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137194 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr86x\" (UniqueName: \"kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137328 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137384 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhcb\" (UniqueName: \"kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137482 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.137599 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.138549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.152598 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhcb\" (UniqueName: \"kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb\") pod \"barbican-db-create-d2zgp\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.215848 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-79q5z"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.216972 4735 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.228749 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.229105 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.229399 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pfppv" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.234195 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.234391 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.238906 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchvv\" (UniqueName: \"kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.238975 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.239033 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.239114 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk79h\" (UniqueName: \"kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.239152 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr86x\" (UniqueName: \"kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.239189 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.239246 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.240261 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.241190 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.256152 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-79q5z"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.276885 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk79h\" (UniqueName: \"kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h\") pod \"neutron-db-create-vrmlh\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.281349 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr86x\" (UniqueName: \"kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x\") pod \"barbican-9777-account-create-update-wjtv8\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.287589 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e746-account-create-update-djp42"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.288843 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.291171 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.299176 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e746-account-create-update-djp42"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.332076 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6l488" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.332908 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6l488" event={"ID":"03d29a2f-1ba4-48e8-8c33-c1a96440ae36","Type":"ContainerDied","Data":"c869bc5731ae5c35f2d97b3d05817c3bb28aa1cfc2f93a61eb34d5c34f591288"} Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.332949 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c869bc5731ae5c35f2d97b3d05817c3bb28aa1cfc2f93a61eb34d5c34f591288" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.334406 4735 generic.go:334] "Generic (PLEG): container finished" podID="012685c6-c405-4c79-806a-084aee6d5f70" containerID="7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692" exitCode=0 Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.334450 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" event={"ID":"012685c6-c405-4c79-806a-084aee6d5f70","Type":"ContainerDied","Data":"7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692"} Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.334470 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" event={"ID":"012685c6-c405-4c79-806a-084aee6d5f70","Type":"ContainerStarted","Data":"56c74f7aed82c62b073c6e8b1b614a07fbf55db6ee87112c39f373385d633cba"} Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.345333 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54fp\" (UniqueName: \"kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.345510 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.345622 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bchvv\" (UniqueName: \"kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.345711 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.345767 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.351878 
4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.352004 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.370789 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.384168 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bchvv\" (UniqueName: \"kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv\") pod \"keystone-db-sync-79q5z\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.412151 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.423932 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.449627 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54fp\" (UniqueName: \"kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.449775 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.450653 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.472138 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54fp\" (UniqueName: \"kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp\") pod \"neutron-e746-account-create-update-djp42\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.513305 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jq6db"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.545802 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:47 crc kubenswrapper[4735]: W0131 15:14:47.583503 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcecf62c3_6e9c_44cb_9963_6ac8a95baa14.slice/crio-09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a WatchSource:0}: Error finding container 09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a: Status 404 returned error can't find the container with id 09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.617703 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.840886 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.892116 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.893602 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.906595 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.980298 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.980343 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.980363 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lqj\" (UniqueName: \"kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.980392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc kubenswrapper[4735]: I0131 15:14:47.980438 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:47 crc 
kubenswrapper[4735]: I0131 15:14:47.980492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.044406 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b354-account-create-update-dsnth"] Jan 31 15:14:48 crc kubenswrapper[4735]: W0131 15:14:48.050841 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6922b6e_24c3_4a4d_99fd_7027a8b33273.slice/crio-b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5 WatchSource:0}: Error finding container b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5: Status 404 returned error can't find the container with id b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5 Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.081811 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.081882 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.081943 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.082001 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.082027 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.082049 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lqj\" (UniqueName: \"kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.082956 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.083206 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.083386 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.084926 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.095753 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.110154 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lqj\" (UniqueName: \"kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj\") pod \"dnsmasq-dns-74f6bcbc87-g5j87\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.184207 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d2zgp"] Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.236785 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.278361 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9777-account-create-update-wjtv8"] Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.286742 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vrmlh"] Jan 31 15:14:48 crc kubenswrapper[4735]: W0131 15:14:48.288951 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72d3a05_ea57_4446_96f6_731172bde4a3.slice/crio-d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8 WatchSource:0}: Error finding container d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8: Status 404 returned error can't find the container with id d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8 Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.363639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" event={"ID":"012685c6-c405-4c79-806a-084aee6d5f70","Type":"ContainerStarted","Data":"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.363823 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="dnsmasq-dns" containerID="cri-o://343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797" gracePeriod=10 Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.364037 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.369022 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b354-account-create-update-dsnth" event={"ID":"f6922b6e-24c3-4a4d-99fd-7027a8b33273","Type":"ContainerStarted","Data":"af6c5a0e63026f181a446ddc54de04ae0289ecdcfd8098db2e5f9e26c5fe0ae6"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.369048 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b354-account-create-update-dsnth" event={"ID":"f6922b6e-24c3-4a4d-99fd-7027a8b33273","Type":"ContainerStarted","Data":"b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.370622 4735 generic.go:334] "Generic (PLEG): container finished" podID="cecf62c3-6e9c-44cb-9963-6ac8a95baa14" containerID="99c6677f1a611414472452da90d84a508b8b8138917ee7bd4d2772860d17b364" exitCode=0 Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.370667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jq6db" event={"ID":"cecf62c3-6e9c-44cb-9963-6ac8a95baa14","Type":"ContainerDied","Data":"99c6677f1a611414472452da90d84a508b8b8138917ee7bd4d2772860d17b364"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.370682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jq6db" event={"ID":"cecf62c3-6e9c-44cb-9963-6ac8a95baa14","Type":"ContainerStarted","Data":"09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.372506 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d2zgp" 
event={"ID":"41b16dfd-fefb-49ee-adde-9d244ca8ccbe","Type":"ContainerStarted","Data":"31d9970dbfca0e1b36be321baf4e508be4c888b401fa7fac7f85ac14c00426ad"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.377089 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrmlh" event={"ID":"e72d3a05-ea57-4446-96f6-731172bde4a3","Type":"ContainerStarted","Data":"d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8"} Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.382595 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" podStartSLOduration=3.382578575 podStartE2EDuration="3.382578575s" podCreationTimestamp="2026-01-31 15:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:48.381587547 +0000 UTC m=+974.154916589" watchObservedRunningTime="2026-01-31 15:14:48.382578575 +0000 UTC m=+974.155907617" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.409961 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b354-account-create-update-dsnth" podStartSLOduration=2.4099393080000002 podStartE2EDuration="2.409939308s" podCreationTimestamp="2026-01-31 15:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:48.402134888 +0000 UTC m=+974.175463940" watchObservedRunningTime="2026-01-31 15:14:48.409939308 +0000 UTC m=+974.183268350" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.470746 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e746-account-create-update-djp42"] Jan 31 15:14:48 crc kubenswrapper[4735]: W0131 15:14:48.490467 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0bc38a4_6212_4fa2_afb1_e8da3e3271a6.slice/crio-b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1 WatchSource:0}: Error finding container b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1: Status 404 returned error can't find the container with id b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1 Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.501673 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-79q5z"] Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.770519 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.865972 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899353 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899436 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899536 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899570 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899632 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w52rs\" (UniqueName: \"kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.899762 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config\") pod \"012685c6-c405-4c79-806a-084aee6d5f70\" (UID: \"012685c6-c405-4c79-806a-084aee6d5f70\") " Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.917968 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs" (OuterVolumeSpecName: "kube-api-access-w52rs") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "kube-api-access-w52rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.952480 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.963169 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.963867 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.978906 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:48 crc kubenswrapper[4735]: I0131 15:14:48.983042 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config" (OuterVolumeSpecName: "config") pod "012685c6-c405-4c79-806a-084aee6d5f70" (UID: "012685c6-c405-4c79-806a-084aee6d5f70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001514 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001551 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001566 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001577 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001590 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012685c6-c405-4c79-806a-084aee6d5f70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.001601 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w52rs\" (UniqueName: \"kubernetes.io/projected/012685c6-c405-4c79-806a-084aee6d5f70-kube-api-access-w52rs\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.389541 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b376088-4875-44ac-a0b2-2e80ffa08acf" containerID="f79f4e3ecbbdea61a0e862aedc731a56ae711faafccecf904854c1e09d472ae5" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.389658 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9777-account-create-update-wjtv8" event={"ID":"2b376088-4875-44ac-a0b2-2e80ffa08acf","Type":"ContainerDied","Data":"f79f4e3ecbbdea61a0e862aedc731a56ae711faafccecf904854c1e09d472ae5"} Jan 31 15:14:49 crc 
kubenswrapper[4735]: I0131 15:14:49.389966 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9777-account-create-update-wjtv8" event={"ID":"2b376088-4875-44ac-a0b2-2e80ffa08acf","Type":"ContainerStarted","Data":"bc8a3882023d24a410db99a2891189500fbcbbf6a1e2c61c2fdcfeb26fb84161"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.396029 4735 generic.go:334] "Generic (PLEG): container finished" podID="e72d3a05-ea57-4446-96f6-731172bde4a3" containerID="b0fce810328aa92d3f4797cca576c62b76d3330c47dbf3fd3beb9e086c82c08a" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.396203 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrmlh" event={"ID":"e72d3a05-ea57-4446-96f6-731172bde4a3","Type":"ContainerDied","Data":"b0fce810328aa92d3f4797cca576c62b76d3330c47dbf3fd3beb9e086c82c08a"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.398951 4735 generic.go:334] "Generic (PLEG): container finished" podID="d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" containerID="624bfdd3a51382b6969fa68ed12cc671dac248d54dcd0b2e25aab0853df3ceef" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.399077 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e746-account-create-update-djp42" event={"ID":"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6","Type":"ContainerDied","Data":"624bfdd3a51382b6969fa68ed12cc671dac248d54dcd0b2e25aab0853df3ceef"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.399146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e746-account-create-update-djp42" event={"ID":"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6","Type":"ContainerStarted","Data":"b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.404231 4735 generic.go:334] "Generic (PLEG): container finished" podID="012685c6-c405-4c79-806a-084aee6d5f70" containerID="343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.404322 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.404355 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" event={"ID":"012685c6-c405-4c79-806a-084aee6d5f70","Type":"ContainerDied","Data":"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.404466 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qvkkz" event={"ID":"012685c6-c405-4c79-806a-084aee6d5f70","Type":"ContainerDied","Data":"56c74f7aed82c62b073c6e8b1b614a07fbf55db6ee87112c39f373385d633cba"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.404489 4735 scope.go:117] "RemoveContainer" containerID="343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.407825 4735 generic.go:334] "Generic (PLEG): container finished" podID="f746a606-2110-4dc1-8724-5fc004908bec" containerID="42c72b63d8ea520a4212a7114d0922fea2c3c7f32e69a47cd354fd8886499686" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.408107 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" event={"ID":"f746a606-2110-4dc1-8724-5fc004908bec","Type":"ContainerDied","Data":"42c72b63d8ea520a4212a7114d0922fea2c3c7f32e69a47cd354fd8886499686"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.408151 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" event={"ID":"f746a606-2110-4dc1-8724-5fc004908bec","Type":"ContainerStarted","Data":"515d9cd818f26866ade682633d9e314f9e25c1fc278ac4cfe553d22ea4d68982"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.420121 4735 generic.go:334] "Generic (PLEG): container finished" podID="f6922b6e-24c3-4a4d-99fd-7027a8b33273" containerID="af6c5a0e63026f181a446ddc54de04ae0289ecdcfd8098db2e5f9e26c5fe0ae6" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.420209 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b354-account-create-update-dsnth" event={"ID":"f6922b6e-24c3-4a4d-99fd-7027a8b33273","Type":"ContainerDied","Data":"af6c5a0e63026f181a446ddc54de04ae0289ecdcfd8098db2e5f9e26c5fe0ae6"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.426268 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-79q5z" event={"ID":"893a5a30-2ca2-4d47-9882-2bf19e0233ad","Type":"ContainerStarted","Data":"733b7d0cfac48eb0dec788f674b067cdd00fc8e4cb7f578aba078b8362c3e849"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.433947 4735 generic.go:334] "Generic (PLEG): container finished" podID="41b16dfd-fefb-49ee-adde-9d244ca8ccbe" containerID="87ba1d3c10ce22688b82db1035519bc40e7da0cd4bc6de78c90cb0340a879cf8" exitCode=0 Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.434277 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d2zgp" event={"ID":"41b16dfd-fefb-49ee-adde-9d244ca8ccbe","Type":"ContainerDied","Data":"87ba1d3c10ce22688b82db1035519bc40e7da0cd4bc6de78c90cb0340a879cf8"} Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.443204 4735 scope.go:117] "RemoveContainer" containerID="7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.574164 4735 scope.go:117] "RemoveContainer" 
containerID="343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797" Jan 31 15:14:49 crc kubenswrapper[4735]: E0131 15:14:49.577626 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797\": container with ID starting with 343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797 not found: ID does not exist" containerID="343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.577702 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797"} err="failed to get container status \"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797\": rpc error: code = NotFound desc = could not find container \"343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797\": container with ID starting with 343c242aa16c44e39efff0d8ef15e51745152f203f85c5db17c988e36de12797 not found: ID does not exist" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.577741 4735 scope.go:117] "RemoveContainer" containerID="7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692" Jan 31 15:14:49 crc kubenswrapper[4735]: E0131 15:14:49.585861 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692\": container with ID starting with 7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692 not found: ID does not exist" containerID="7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.585900 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692"} err="failed to get container status \"7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692\": rpc error: code = NotFound desc = could not find container \"7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692\": container with ID starting with 7d921d07cd428c20d2c7538830453d91b84ff02ac66c0f3fe83e2be17a325692 not found: ID does not exist" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.589395 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.599531 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qvkkz"] Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.762704 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.820846 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts\") pod \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.821329 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktk6\" (UniqueName: \"kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6\") pod \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\" (UID: \"cecf62c3-6e9c-44cb-9963-6ac8a95baa14\") " Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.822039 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cecf62c3-6e9c-44cb-9963-6ac8a95baa14" (UID: "cecf62c3-6e9c-44cb-9963-6ac8a95baa14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.825267 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6" (OuterVolumeSpecName: "kube-api-access-zktk6") pod "cecf62c3-6e9c-44cb-9963-6ac8a95baa14" (UID: "cecf62c3-6e9c-44cb-9963-6ac8a95baa14"). InnerVolumeSpecName "kube-api-access-zktk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.923344 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zktk6\" (UniqueName: \"kubernetes.io/projected/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-kube-api-access-zktk6\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:49 crc kubenswrapper[4735]: I0131 15:14:49.923655 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cecf62c3-6e9c-44cb-9963-6ac8a95baa14-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.444888 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" event={"ID":"f746a606-2110-4dc1-8724-5fc004908bec","Type":"ContainerStarted","Data":"2ab77cb4f534ea7f45792b78d57e43de7686689d9e923cbab3b74ed0193d510a"} Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.445001 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.449101 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jq6db" event={"ID":"cecf62c3-6e9c-44cb-9963-6ac8a95baa14","Type":"ContainerDied","Data":"09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a"} Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.449205 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jq6db" Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.449212 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f83495054785d5757e86930ef14b5c383ec8168a89bead0036cad1bd0bac2a" Jan 31 15:14:50 crc kubenswrapper[4735]: I0131 15:14:50.469597 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podStartSLOduration=3.469572764 podStartE2EDuration="3.469572764s" podCreationTimestamp="2026-01-31 15:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:50.461807484 +0000 UTC m=+976.235136536" watchObservedRunningTime="2026-01-31 15:14:50.469572764 +0000 UTC m=+976.242901806" Jan 31 15:14:51 crc kubenswrapper[4735]: I0131 15:14:51.553150 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012685c6-c405-4c79-806a-084aee6d5f70" path="/var/lib/kubelet/pods/012685c6-c405-4c79-806a-084aee6d5f70/volumes" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.479886 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrmlh" event={"ID":"e72d3a05-ea57-4446-96f6-731172bde4a3","Type":"ContainerDied","Data":"d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8"} Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.480150 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d8a8a4dcb5d4246978787eed9e98d051568cbc48ab43763b445822907c33b8" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.481720 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e746-account-create-update-djp42" event={"ID":"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6","Type":"ContainerDied","Data":"b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1"} Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.481759 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9675af06bef0bd8e5ba27e9cc188f24332ac9730d6d8cfb03f346d1e200cda1" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.482967 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b354-account-create-update-dsnth" event={"ID":"f6922b6e-24c3-4a4d-99fd-7027a8b33273","Type":"ContainerDied","Data":"b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5"} Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.482984 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b14ee0d3fddea1b4e93b0dfdfadfd58e71d145814d436acbde09c8f617b9c1f5" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.484082 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d2zgp" event={"ID":"41b16dfd-fefb-49ee-adde-9d244ca8ccbe","Type":"ContainerDied","Data":"31d9970dbfca0e1b36be321baf4e508be4c888b401fa7fac7f85ac14c00426ad"} Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.484097 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d9970dbfca0e1b36be321baf4e508be4c888b401fa7fac7f85ac14c00426ad" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.485386 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9777-account-create-update-wjtv8" 
event={"ID":"2b376088-4875-44ac-a0b2-2e80ffa08acf","Type":"ContainerDied","Data":"bc8a3882023d24a410db99a2891189500fbcbbf6a1e2c61c2fdcfeb26fb84161"} Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.485408 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc8a3882023d24a410db99a2891189500fbcbbf6a1e2c61c2fdcfeb26fb84161" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.490121 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.496676 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.504952 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.519002 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.523319 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.597944 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts\") pod \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.597986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts\") pod \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598008 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk79h\" (UniqueName: \"kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h\") pod \"e72d3a05-ea57-4446-96f6-731172bde4a3\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598082 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fhcb\" (UniqueName: \"kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb\") pod \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598159 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts\") pod \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\" (UID: \"41b16dfd-fefb-49ee-adde-9d244ca8ccbe\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v54fp\" (UniqueName: \"kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp\") pod \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\" (UID: \"d0bc38a4-6212-4fa2-afb1-e8da3e3271a6\") " Jan 31 15:14:53 crc 
kubenswrapper[4735]: I0131 15:14:53.598207 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8vb4\" (UniqueName: \"kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4\") pod \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\" (UID: \"f6922b6e-24c3-4a4d-99fd-7027a8b33273\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598238 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts\") pod \"e72d3a05-ea57-4446-96f6-731172bde4a3\" (UID: \"e72d3a05-ea57-4446-96f6-731172bde4a3\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598272 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts\") pod \"2b376088-4875-44ac-a0b2-2e80ffa08acf\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598302 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr86x\" (UniqueName: \"kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x\") pod \"2b376088-4875-44ac-a0b2-2e80ffa08acf\" (UID: \"2b376088-4875-44ac-a0b2-2e80ffa08acf\") " Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598878 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41b16dfd-fefb-49ee-adde-9d244ca8ccbe" (UID: "41b16dfd-fefb-49ee-adde-9d244ca8ccbe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.598892 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6922b6e-24c3-4a4d-99fd-7027a8b33273" (UID: "f6922b6e-24c3-4a4d-99fd-7027a8b33273"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.599481 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" (UID: "d0bc38a4-6212-4fa2-afb1-e8da3e3271a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.599664 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b376088-4875-44ac-a0b2-2e80ffa08acf" (UID: "2b376088-4875-44ac-a0b2-2e80ffa08acf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.599835 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e72d3a05-ea57-4446-96f6-731172bde4a3" (UID: "e72d3a05-ea57-4446-96f6-731172bde4a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.603053 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x" (OuterVolumeSpecName: "kube-api-access-vr86x") pod "2b376088-4875-44ac-a0b2-2e80ffa08acf" (UID: "2b376088-4875-44ac-a0b2-2e80ffa08acf"). InnerVolumeSpecName "kube-api-access-vr86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.603094 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4" (OuterVolumeSpecName: "kube-api-access-v8vb4") pod "f6922b6e-24c3-4a4d-99fd-7027a8b33273" (UID: "f6922b6e-24c3-4a4d-99fd-7027a8b33273"). InnerVolumeSpecName "kube-api-access-v8vb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.603637 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb" (OuterVolumeSpecName: "kube-api-access-8fhcb") pod "41b16dfd-fefb-49ee-adde-9d244ca8ccbe" (UID: "41b16dfd-fefb-49ee-adde-9d244ca8ccbe"). InnerVolumeSpecName "kube-api-access-8fhcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.603918 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp" (OuterVolumeSpecName: "kube-api-access-v54fp") pod "d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" (UID: "d0bc38a4-6212-4fa2-afb1-e8da3e3271a6"). InnerVolumeSpecName "kube-api-access-v54fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.604775 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h" (OuterVolumeSpecName: "kube-api-access-wk79h") pod "e72d3a05-ea57-4446-96f6-731172bde4a3" (UID: "e72d3a05-ea57-4446-96f6-731172bde4a3"). InnerVolumeSpecName "kube-api-access-wk79h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699909 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fhcb\" (UniqueName: \"kubernetes.io/projected/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-kube-api-access-8fhcb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699960 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41b16dfd-fefb-49ee-adde-9d244ca8ccbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699969 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v54fp\" (UniqueName: \"kubernetes.io/projected/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-kube-api-access-v54fp\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699978 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8vb4\" (UniqueName: \"kubernetes.io/projected/f6922b6e-24c3-4a4d-99fd-7027a8b33273-kube-api-access-v8vb4\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699988 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e72d3a05-ea57-4446-96f6-731172bde4a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.699997 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b376088-4875-44ac-a0b2-2e80ffa08acf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.700007 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr86x\" (UniqueName: \"kubernetes.io/projected/2b376088-4875-44ac-a0b2-2e80ffa08acf-kube-api-access-vr86x\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.700015 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6922b6e-24c3-4a4d-99fd-7027a8b33273-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.700023 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:53 crc kubenswrapper[4735]: I0131 15:14:53.700030 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk79h\" (UniqueName: \"kubernetes.io/projected/e72d3a05-ea57-4446-96f6-731172bde4a3-kube-api-access-wk79h\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508240 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d2zgp" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508245 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9777-account-create-update-wjtv8" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508245 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e746-account-create-update-djp42" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508278 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b354-account-create-update-dsnth" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508292 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-79q5z" event={"ID":"893a5a30-2ca2-4d47-9882-2bf19e0233ad","Type":"ContainerStarted","Data":"68746ad976b78121b441a768bc5103b19acad0d7d0016529e011d3e53b0784d4"} Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.508679 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrmlh" Jan 31 15:14:54 crc kubenswrapper[4735]: I0131 15:14:54.536659 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-79q5z" podStartSLOduration=2.693715557 podStartE2EDuration="7.53664278s" podCreationTimestamp="2026-01-31 15:14:47 +0000 UTC" firstStartedPulling="2026-01-31 15:14:48.531120763 +0000 UTC m=+974.304449805" lastFinishedPulling="2026-01-31 15:14:53.374047976 +0000 UTC m=+979.147377028" observedRunningTime="2026-01-31 15:14:54.523756516 +0000 UTC m=+980.297085568" watchObservedRunningTime="2026-01-31 15:14:54.53664278 +0000 UTC m=+980.309971822" Jan 31 15:14:56 crc kubenswrapper[4735]: I0131 15:14:56.529248 4735 generic.go:334] "Generic (PLEG): container finished" podID="893a5a30-2ca2-4d47-9882-2bf19e0233ad" containerID="68746ad976b78121b441a768bc5103b19acad0d7d0016529e011d3e53b0784d4" exitCode=0 Jan 31 15:14:56 crc kubenswrapper[4735]: I0131 15:14:56.529491 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-79q5z" event={"ID":"893a5a30-2ca2-4d47-9882-2bf19e0233ad","Type":"ContainerDied","Data":"68746ad976b78121b441a768bc5103b19acad0d7d0016529e011d3e53b0784d4"} Jan 31 15:14:56 crc kubenswrapper[4735]: I0131 15:14:56.588588 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:14:57 crc kubenswrapper[4735]: I0131 15:14:57.926704 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:57 crc kubenswrapper[4735]: I0131 15:14:57.973903 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle\") pod \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " Jan 31 15:14:57 crc kubenswrapper[4735]: I0131 15:14:57.973960 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data\") pod \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " Jan 31 15:14:57 crc kubenswrapper[4735]: I0131 15:14:57.973981 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bchvv\" (UniqueName: \"kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv\") pod \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\" (UID: \"893a5a30-2ca2-4d47-9882-2bf19e0233ad\") " Jan 31 15:14:57 crc kubenswrapper[4735]: I0131 15:14:57.979683 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv" (OuterVolumeSpecName: "kube-api-access-bchvv") pod "893a5a30-2ca2-4d47-9882-2bf19e0233ad" (UID: "893a5a30-2ca2-4d47-9882-2bf19e0233ad"). 
InnerVolumeSpecName "kube-api-access-bchvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.000007 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "893a5a30-2ca2-4d47-9882-2bf19e0233ad" (UID: "893a5a30-2ca2-4d47-9882-2bf19e0233ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.015718 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data" (OuterVolumeSpecName: "config-data") pod "893a5a30-2ca2-4d47-9882-2bf19e0233ad" (UID: "893a5a30-2ca2-4d47-9882-2bf19e0233ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.076285 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.076328 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/893a5a30-2ca2-4d47-9882-2bf19e0233ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.076345 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bchvv\" (UniqueName: \"kubernetes.io/projected/893a5a30-2ca2-4d47-9882-2bf19e0233ad-kube-api-access-bchvv\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.239637 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.319764 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.319979 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7vmqt" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="dnsmasq-dns" containerID="cri-o://76a6afb5a810dc257663d7a8b0bece28684555e824d7c7334b3afccba6877d37" gracePeriod=10 Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.555575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-79q5z" event={"ID":"893a5a30-2ca2-4d47-9882-2bf19e0233ad","Type":"ContainerDied","Data":"733b7d0cfac48eb0dec788f674b067cdd00fc8e4cb7f578aba078b8362c3e849"} Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.555812 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733b7d0cfac48eb0dec788f674b067cdd00fc8e4cb7f578aba078b8362c3e849" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.555611 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-79q5z" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.559731 4735 generic.go:334] "Generic (PLEG): container finished" podID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerID="76a6afb5a810dc257663d7a8b0bece28684555e824d7c7334b3afccba6877d37" exitCode=0 Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.559773 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7vmqt" event={"ID":"301f815b-72e0-4b50-8f46-e1b7de77b8fe","Type":"ContainerDied","Data":"76a6afb5a810dc257663d7a8b0bece28684555e824d7c7334b3afccba6877d37"} Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726358 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726762 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="init" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726786 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="init" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726801 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="dnsmasq-dns" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726809 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="dnsmasq-dns" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726822 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72d3a05-ea57-4446-96f6-731172bde4a3" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726831 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72d3a05-ea57-4446-96f6-731172bde4a3" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726850 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6922b6e-24c3-4a4d-99fd-7027a8b33273" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726861 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6922b6e-24c3-4a4d-99fd-7027a8b33273" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726875 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecf62c3-6e9c-44cb-9963-6ac8a95baa14" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726882 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecf62c3-6e9c-44cb-9963-6ac8a95baa14" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726894 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b376088-4875-44ac-a0b2-2e80ffa08acf" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726902 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b376088-4875-44ac-a0b2-2e80ffa08acf" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726916 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b16dfd-fefb-49ee-adde-9d244ca8ccbe" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726923 4735 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="41b16dfd-fefb-49ee-adde-9d244ca8ccbe" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726938 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726946 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: E0131 15:14:58.726959 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893a5a30-2ca2-4d47-9882-2bf19e0233ad" containerName="keystone-db-sync" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.726966 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="893a5a30-2ca2-4d47-9882-2bf19e0233ad" containerName="keystone-db-sync" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727146 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecf62c3-6e9c-44cb-9963-6ac8a95baa14" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727164 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72d3a05-ea57-4446-96f6-731172bde4a3" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727177 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b376088-4875-44ac-a0b2-2e80ffa08acf" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727194 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727204 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="012685c6-c405-4c79-806a-084aee6d5f70" containerName="dnsmasq-dns" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727217 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b16dfd-fefb-49ee-adde-9d244ca8ccbe" containerName="mariadb-database-create" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727229 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="893a5a30-2ca2-4d47-9882-2bf19e0233ad" containerName="keystone-db-sync" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.727237 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6922b6e-24c3-4a4d-99fd-7027a8b33273" containerName="mariadb-account-create-update" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.728260 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.741198 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.758375 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rhpwz"] Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.759689 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.765960 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pfppv" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.766576 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.766745 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.773783 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.774098 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802542 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802615 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802653 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802671 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwj7c\" (UniqueName: \"kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802689 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802721 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802750 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802791 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802808 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802825 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802845 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.802869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.811145 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rhpwz"] Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.811407 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.914603 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc\") pod \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.914674 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb\") pod \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.914751 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config\") pod \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.914768 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb\") pod \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.914885 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdq8q\" (UniqueName: \"kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q\") pod \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\" (UID: \"301f815b-72e0-4b50-8f46-e1b7de77b8fe\") " Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915073 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915128 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915182 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwj7c\" (UniqueName: \"kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915203 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915269 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915302 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915320 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915360 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.915378 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.916171 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.916618 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: 
\"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.917281 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.917766 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.918259 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.927037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.931069 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.936115 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.936776 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q" (OuterVolumeSpecName: "kube-api-access-jdq8q") pod "301f815b-72e0-4b50-8f46-e1b7de77b8fe" (UID: "301f815b-72e0-4b50-8f46-e1b7de77b8fe"). InnerVolumeSpecName "kube-api-access-jdq8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.941698 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.949103 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.990455 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk\") pod \"dnsmasq-dns-847c4cc679-28fjr\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:58 crc kubenswrapper[4735]: I0131 15:14:58.996313 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwj7c\" (UniqueName: \"kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c\") pod \"keystone-bootstrap-rhpwz\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.011108 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:14:59 crc kubenswrapper[4735]: E0131 15:14:59.011503 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="dnsmasq-dns" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.011515 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="dnsmasq-dns" Jan 31 15:14:59 crc kubenswrapper[4735]: E0131 15:14:59.011523 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="init" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.011532 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="init" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.011681 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" containerName="dnsmasq-dns" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.012578 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.017837 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.020779 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.020900 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.020940 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-999jt" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.025545 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.028825 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdq8q\" (UniqueName: \"kubernetes.io/projected/301f815b-72e0-4b50-8f46-e1b7de77b8fe-kube-api-access-jdq8q\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.042851 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-xrbt2"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.043873 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.058729 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nqdgj" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.059071 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.059452 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.092264 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "301f815b-72e0-4b50-8f46-e1b7de77b8fe" (UID: "301f815b-72e0-4b50-8f46-e1b7de77b8fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.092838 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.108163 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xrbt2"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.121941 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "301f815b-72e0-4b50-8f46-e1b7de77b8fe" (UID: "301f815b-72e0-4b50-8f46-e1b7de77b8fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.124014 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "301f815b-72e0-4b50-8f46-e1b7de77b8fe" (UID: "301f815b-72e0-4b50-8f46-e1b7de77b8fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131141 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swvgj\" (UniqueName: \"kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131239 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9xjf\" (UniqueName: \"kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131343 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131471 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131493 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131538 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131742 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config\") pod \"neutron-db-sync-xrbt2\" 
(UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.131985 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.132021 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.132034 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.141765 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config" (OuterVolumeSpecName: "config") pod "301f815b-72e0-4b50-8f46-e1b7de77b8fe" (UID: "301f815b-72e0-4b50-8f46-e1b7de77b8fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.142811 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.155500 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tt9w8"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.156789 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.164512 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tt9w8"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.178713 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.181105 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l628k" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.201822 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.203554 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.224363 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.224651 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.225857 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239183 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8d44\" (UniqueName: \"kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239240 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9xjf\" (UniqueName: \"kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239260 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239278 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239294 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239322 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239342 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239360 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " 
pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.239406 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255463 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255518 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255545 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255567 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255582 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255607 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8j2xp\" (UniqueName: \"kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255725 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.255776 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swvgj\" (UniqueName: \"kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.256913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.257122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.257978 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/301f815b-72e0-4b50-8f46-e1b7de77b8fe-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.265940 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.266531 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.270171 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.277099 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.298996 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.324077 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swvgj\" (UniqueName: \"kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj\") pod \"horizon-6846b66477-2c8n2\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.331268 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9xjf\" (UniqueName: \"kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf\") pod \"neutron-db-sync-xrbt2\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.332884 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.334063 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.369814 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.369892 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.369923 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.369983 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370011 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370045 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370087 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370105 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370120 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370139 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2xp\" (UniqueName: \"kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370179 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370227 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370300 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8d44\" (UniqueName: \"kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.370622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.376627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.377013 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.377123 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.378940 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.381369 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.382254 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.386489 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.390628 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.392396 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.395846 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.398367 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.408219 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bhf6c"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.409612 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.426611 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8d44\" (UniqueName: \"kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44\") pod \"ceilometer-0\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.429226 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l2hgl" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.430666 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.446252 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.452179 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473001 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473114 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473144 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473186 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7dj\" (UniqueName: \"kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473209 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473243 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc 
kubenswrapper[4735]: I0131 15:14:59.473270 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pkqr\" (UniqueName: \"kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.473296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.496528 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bhf6c"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.508289 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2xp\" (UniqueName: \"kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp\") pod \"cinder-db-sync-tt9w8\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.570142 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.613345 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.613578 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tgk2f"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.626739 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.626892 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.626922 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.626952 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7dj\" (UniqueName: \"kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.626970 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.627002 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.627026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pkqr\" (UniqueName: \"kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.627270 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.629051 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.641474 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.642412 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.644768 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgk2f"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.644853 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.652206 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-86jhn" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.652281 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.652564 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.675293 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.676768 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.685740 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7vmqt" event={"ID":"301f815b-72e0-4b50-8f46-e1b7de77b8fe","Type":"ContainerDied","Data":"d07d324556573b3112afee290e2b5e9467a88ee5733aa47b9db3f212c3300480"} Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.685803 4735 scope.go:117] "RemoveContainer" containerID="76a6afb5a810dc257663d7a8b0bece28684555e824d7c7334b3afccba6877d37" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.686016 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7vmqt" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.688048 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.688048 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.688585 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qwr7x" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.688624 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.698558 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pkqr\" (UniqueName: \"kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.698965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.699358 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.700108 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key\") pod \"horizon-84f78fc85-vtd9t\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.703485 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7dj\" (UniqueName: \"kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj\") pod \"barbican-db-sync-bhf6c\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.722039 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 
15:14:59.723860 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.730070 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.740681 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.742398 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.742694 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dz66\" (UniqueName: \"kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.742721 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.742739 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.742812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.753137 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.754918 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.757386 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.757780 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.761357 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.765570 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.805035 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.823145 4735 scope.go:117] "RemoveContainer" containerID="b42c22abca7dd71a03a01cd5ddd3271046e87877be669e305489e9671259099c" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849755 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849795 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849843 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849862 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849902 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849916 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849939 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt78n\" (UniqueName: \"kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849959 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs\") pod \"glance-default-external-api-0\" (UID: 
\"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.849991 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850008 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9n4z\" (UniqueName: \"kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850055 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850076 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850105 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850144 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850197 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850243 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxpw\" (UniqueName: \"kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw\") 
pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850351 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850471 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850519 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850536 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850592 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850612 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dz66\" (UniqueName: \"kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850630 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850666 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850686 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850736 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.850909 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.865259 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.867531 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.871931 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.904748 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dz66\" (UniqueName: \"kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66\") pod \"placement-db-sync-tgk2f\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " pod="openstack/placement-db-sync-tgk2f" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961280 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961332 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961361 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc 
kubenswrapper[4735]: I0131 15:14:59.961376 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961404 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961430 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961455 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt78n\" (UniqueName: \"kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961504 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9n4z\" (UniqueName: \"kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961519 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961543 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961557 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961577 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961635 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxpw\" (UniqueName: \"kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961654 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961697 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961738 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961753 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.961804 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.962095 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.962135 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.962879 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.963544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.963921 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.964528 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.965010 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.965341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.965577 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.966042 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.974411 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.980253 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.980641 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.983068 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:14:59 crc kubenswrapper[4735]: I0131 15:14:59.986644 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.000195 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.000680 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.001364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.002219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt78n\" (UniqueName: 
\"kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n\") pod \"dnsmasq-dns-785d8bcb8c-cvvlz\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.002372 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.016079 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxpw\" (UniqueName: \"kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.020039 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9n4z\" (UniqueName: \"kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.025393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.030987 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.052178 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.119468 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.132937 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgk2f" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.140970 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.165998 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.192176 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.236773 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.239069 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.242182 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.242283 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.254819 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7vmqt"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.264261 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.273166 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.378090 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.378174 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.378232 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5hl\" (UniqueName: \"kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.479777 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.479846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.479884 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5hl\" (UniqueName: \"kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.481120 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.499804 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rhpwz"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.501297 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5hl\" (UniqueName: \"kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.502101 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume\") pod \"collect-profiles-29497875-ffmvv\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.514249 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:15:00 crc kubenswrapper[4735]: W0131 15:15:00.569785 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0df3e527_b215_4bf6_b5d0_524d860670fc.slice/crio-ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b WatchSource:0}: Error finding container ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b: Status 404 returned error can't find the container with id ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.592879 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.701757 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6846b66477-2c8n2" event={"ID":"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7","Type":"ContainerStarted","Data":"09056a0f2e5c5bf3fe74575408abe726da258cdd073a6626fae4795a2ccf5638"} Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.703640 4735 generic.go:334] "Generic (PLEG): container finished" podID="af6fad05-2eaf-468f-9133-5c38df5f9517" containerID="d8020e539e067347d7513d33aef0047f4ce0c8cdc6d05c5b93938286a4ec03ec" exitCode=0 Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.703844 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" event={"ID":"af6fad05-2eaf-468f-9133-5c38df5f9517","Type":"ContainerDied","Data":"d8020e539e067347d7513d33aef0047f4ce0c8cdc6d05c5b93938286a4ec03ec"} Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.703863 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" event={"ID":"af6fad05-2eaf-468f-9133-5c38df5f9517","Type":"ContainerStarted","Data":"98d2b6d2b0b38789969e000df19502769781782807f4b14a85d7fbb916bca7d1"} Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.712993 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhpwz" event={"ID":"0df3e527-b215-4bf6-b5d0-524d860670fc","Type":"ContainerStarted","Data":"ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b"} Jan 31 15:15:00 crc kubenswrapper[4735]: W0131 15:15:00.967716 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972df6a4_e6ad_41de_9573_b80779a22bd3.slice/crio-a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752 WatchSource:0}: Error finding container a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752: Status 404 returned error can't find the container with id a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752 Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.976174 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tt9w8"] Jan 31 15:15:00 crc kubenswrapper[4735]: W0131 15:15:00.977353 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b5bc7d4_ba0a_4ed0_990a_44186c837298.slice/crio-f2d52f77d5e893c14b8e4474dcfdfdbb88969330afacf1854bf7b78c27520a93 WatchSource:0}: Error finding container f2d52f77d5e893c14b8e4474dcfdfdbb88969330afacf1854bf7b78c27520a93: Status 404 returned error can't find the container with id f2d52f77d5e893c14b8e4474dcfdfdbb88969330afacf1854bf7b78c27520a93 Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.985796 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.993016 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bhf6c"] Jan 31 15:15:00 crc kubenswrapper[4735]: I0131 15:15:00.999233 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xrbt2"] Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.066657 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgk2f"] Jan 31 15:15:01 crc kubenswrapper[4735]: W0131 15:15:01.074714 4735 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2855e3d7_1280_4652_8188_5a36fa3c992b.slice/crio-f10f378fc350233fb052c7c66bc64d05666180beb480af63745e543c59290f70 WatchSource:0}: Error finding container f10f378fc350233fb052c7c66bc64d05666180beb480af63745e543c59290f70: Status 404 returned error can't find the container with id f10f378fc350233fb052c7c66bc64d05666180beb480af63745e543c59290f70 Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.079026 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.382105 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.429484 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:15:01 crc kubenswrapper[4735]: W0131 15:15:01.444033 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818e7c00_8672_44f1_8d47_4a2c2c7d6a3c.slice/crio-04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338 WatchSource:0}: Error finding container 04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338: Status 404 returned error can't find the container with id 04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338 Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.458168 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv"] Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.506849 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511018 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0\") pod \"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511064 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb\") pod \"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511098 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk\") pod \"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511188 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc\") pod \"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511222 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config\") pod 
\"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.511360 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb\") pod \"af6fad05-2eaf-468f-9133-5c38df5f9517\" (UID: \"af6fad05-2eaf-468f-9133-5c38df5f9517\") " Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.522397 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk" (OuterVolumeSpecName: "kube-api-access-t8dpk") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "kube-api-access-t8dpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.541492 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: W0131 15:15:01.543067 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32030c60_5a22_459e_b84f_a28dac3337b0.slice/crio-cf2094207c97809ab546ed0b026d9555b24584a7d3c282c58c64acf2d8f4097d WatchSource:0}: Error finding container cf2094207c97809ab546ed0b026d9555b24584a7d3c282c58c64acf2d8f4097d: Status 404 returned error can't find the container with id cf2094207c97809ab546ed0b026d9555b24584a7d3c282c58c64acf2d8f4097d Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.544934 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.553753 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config" (OuterVolumeSpecName: "config") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.555994 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301f815b-72e0-4b50-8f46-e1b7de77b8fe" path="/var/lib/kubelet/pods/301f815b-72e0-4b50-8f46-e1b7de77b8fe/volumes" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.571329 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.574603 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af6fad05-2eaf-468f-9133-5c38df5f9517" (UID: "af6fad05-2eaf-468f-9133-5c38df5f9517"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613260 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613288 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613299 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613308 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613317 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af6fad05-2eaf-468f-9133-5c38df5f9517-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.613324 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8dpk\" (UniqueName: \"kubernetes.io/projected/af6fad05-2eaf-468f-9133-5c38df5f9517-kube-api-access-t8dpk\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.758644 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" event={"ID":"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c","Type":"ContainerStarted","Data":"75eb9e7e00948789af2a28ea962431944a770f2668aa64da3a4b0fd0bd1ec88c"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.758708 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" event={"ID":"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c","Type":"ContainerStarted","Data":"04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.762827 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84f78fc85-vtd9t" event={"ID":"2855e3d7-1280-4652-8188-5a36fa3c992b","Type":"ContainerStarted","Data":"f10f378fc350233fb052c7c66bc64d05666180beb480af63745e543c59290f70"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.766697 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhpwz" event={"ID":"0df3e527-b215-4bf6-b5d0-524d860670fc","Type":"ContainerStarted","Data":"972e531c10b2ff3ca68dc07df81cbca1f749b88e488aafb01f799a96591be545"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.773224 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-tgk2f" event={"ID":"d7152602-a51d-4f77-894f-6514ac5816b7","Type":"ContainerStarted","Data":"e4a91927242c5a54a8e1b3169f473b3997081a4e166d1e8629658b7ef969899a"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.798641 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerStarted","Data":"f2d52f77d5e893c14b8e4474dcfdfdbb88969330afacf1854bf7b78c27520a93"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.802963 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" podStartSLOduration=1.8029084960000001 podStartE2EDuration="1.802908496s" podCreationTimestamp="2026-01-31 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:01.775142231 +0000 UTC m=+987.548471273" watchObservedRunningTime="2026-01-31 15:15:01.802908496 +0000 UTC m=+987.576237548" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.809408 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xrbt2" event={"ID":"2a5fd5ef-5566-4f7c-8e51-ed296536a540","Type":"ContainerStarted","Data":"31748d0a3288e5b8c7638f7250f52d80646f674cdc5d2f03b5998bcff84972c1"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.809549 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xrbt2" event={"ID":"2a5fd5ef-5566-4f7c-8e51-ed296536a540","Type":"ContainerStarted","Data":"2c0cb30d13d20437e4077e1c08b9e08b28d15d4e30c2775baf8a43f7a91c61f2"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.816893 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerStarted","Data":"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.816957 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerStarted","Data":"58e60cc917e176686458cf73fd3f138433eb8ee743250f70e62ce9e6e251c1a8"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.824986 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rhpwz" podStartSLOduration=3.8249598689999997 podStartE2EDuration="3.824959869s" podCreationTimestamp="2026-01-31 15:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:01.797383069 +0000 UTC m=+987.570712121" watchObservedRunningTime="2026-01-31 15:15:01.824959869 +0000 UTC m=+987.598288911" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.833878 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" event={"ID":"af6fad05-2eaf-468f-9133-5c38df5f9517","Type":"ContainerDied","Data":"98d2b6d2b0b38789969e000df19502769781782807f4b14a85d7fbb916bca7d1"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.834212 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-28fjr" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.848415 4735 scope.go:117] "RemoveContainer" containerID="d8020e539e067347d7513d33aef0047f4ce0c8cdc6d05c5b93938286a4ec03ec" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.848157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt9w8" event={"ID":"972df6a4-e6ad-41de-9573-b80779a22bd3","Type":"ContainerStarted","Data":"a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.865649 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-xrbt2" podStartSLOduration=3.865628868 podStartE2EDuration="3.865628868s" podCreationTimestamp="2026-01-31 15:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:01.848898525 +0000 UTC m=+987.622227587" watchObservedRunningTime="2026-01-31 15:15:01.865628868 +0000 UTC m=+987.638957940" Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.875300 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bhf6c" event={"ID":"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98","Type":"ContainerStarted","Data":"59557337d60d8c3bee0ae901c5e34728389d897424f8ddf1b60c47f4f5d4abe5"} Jan 31 15:15:01 crc kubenswrapper[4735]: I0131 15:15:01.891779 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerStarted","Data":"cf2094207c97809ab546ed0b026d9555b24584a7d3c282c58c64acf2d8f4097d"} Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.232483 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.237380 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-28fjr"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.390681 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:02 crc kubenswrapper[4735]: W0131 15:15:02.431859 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eab194d_dd52_47d7_b9dc_7fa4d97ffa74.slice/crio-6e3a0ef9f47dcdf896b3aa9bf9c8c4c7428ad70f0d7725b065ce77566fa04f2c WatchSource:0}: Error finding container 6e3a0ef9f47dcdf896b3aa9bf9c8c4c7428ad70f0d7725b065ce77566fa04f2c: Status 404 returned error can't find the container with id 6e3a0ef9f47dcdf896b3aa9bf9c8c4c7428ad70f0d7725b065ce77566fa04f2c Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.571681 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.585498 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.605501 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.617501 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:02 crc kubenswrapper[4735]: E0131 15:15:02.617913 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af6fad05-2eaf-468f-9133-5c38df5f9517" containerName="init" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.617931 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6fad05-2eaf-468f-9133-5c38df5f9517" containerName="init" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.618118 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6fad05-2eaf-468f-9133-5c38df5f9517" containerName="init" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.619013 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.638011 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.724535 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.741635 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.741730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nv6x\" (UniqueName: \"kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.741785 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.741865 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.741907 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.845341 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.845403 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data\") pod 
\"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.845488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.845542 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nv6x\" (UniqueName: \"kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.845581 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.846259 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.846690 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.846941 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.860249 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.876783 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nv6x\" (UniqueName: \"kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x\") pod \"horizon-8546f66f5c-h27dg\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.915039 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerStarted","Data":"6e3a0ef9f47dcdf896b3aa9bf9c8c4c7428ad70f0d7725b065ce77566fa04f2c"} Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.919799 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerID="dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6" exitCode=0 Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.919932 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerDied","Data":"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6"} Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.919964 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerStarted","Data":"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279"} Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.920242 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.934398 4735 generic.go:334] "Generic (PLEG): container finished" podID="818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" containerID="75eb9e7e00948789af2a28ea962431944a770f2668aa64da3a4b0fd0bd1ec88c" exitCode=0 Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.935065 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" event={"ID":"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c","Type":"ContainerDied","Data":"75eb9e7e00948789af2a28ea962431944a770f2668aa64da3a4b0fd0bd1ec88c"} Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.940888 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" podStartSLOduration=3.940871135 podStartE2EDuration="3.940871135s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:02.939941899 +0000 UTC m=+988.713270961" watchObservedRunningTime="2026-01-31 15:15:02.940871135 +0000 UTC m=+988.714200177" Jan 31 15:15:02 crc kubenswrapper[4735]: I0131 15:15:02.957300 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.492812 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.556505 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6fad05-2eaf-468f-9133-5c38df5f9517" path="/var/lib/kubelet/pods/af6fad05-2eaf-468f-9133-5c38df5f9517/volumes" Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.962958 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerStarted","Data":"7ef39cc089929eadcd99043d33e28e1f802cca6a429e0e840d88ca794bf01cb0"} Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.963210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerStarted","Data":"b3669922b424f7305f46042341cf7e1ba8c8937bf4a1f11308f0be192e912848"} Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.963318 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-log" containerID="cri-o://b3669922b424f7305f46042341cf7e1ba8c8937bf4a1f11308f0be192e912848" gracePeriod=30 Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.963684 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-httpd" containerID="cri-o://7ef39cc089929eadcd99043d33e28e1f802cca6a429e0e840d88ca794bf01cb0" gracePeriod=30 Jan 31 15:15:03 crc kubenswrapper[4735]: I0131 15:15:03.991593 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerStarted","Data":"246e83d652a28ab033779bfad5edcc429febd40b3dc7f2c5f50513388cdb2cb1"} Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.018891 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.01887425 podStartE2EDuration="5.01887425s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:04.017061089 +0000 UTC m=+989.790390151" watchObservedRunningTime="2026-01-31 15:15:04.01887425 +0000 UTC m=+989.792203292" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.028093 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8546f66f5c-h27dg" event={"ID":"c63d6777-cad9-45c3-b5ea-795ecf384616","Type":"ContainerStarted","Data":"af9842fc1a5a8796d22fbe186f32e8a4b16785b01e6f5b9945f7cc5a738b2e88"} Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.592825 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.706686 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume\") pod \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.707030 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5hl\" (UniqueName: \"kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl\") pod \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.707126 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume\") pod \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\" (UID: \"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c\") " Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.709995 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" (UID: "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.724874 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl" (OuterVolumeSpecName: "kube-api-access-vh5hl") pod "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" (UID: "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c"). InnerVolumeSpecName "kube-api-access-vh5hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.726031 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" (UID: "818e7c00-8672-44f1-8d47-4a2c2c7d6a3c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.811086 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5hl\" (UniqueName: \"kubernetes.io/projected/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-kube-api-access-vh5hl\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.811116 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:04 crc kubenswrapper[4735]: I0131 15:15:04.811126 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.043927 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.044505 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv" event={"ID":"818e7c00-8672-44f1-8d47-4a2c2c7d6a3c","Type":"ContainerDied","Data":"04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338"} Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.044536 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d5973a6497ef549f1a27c59ca3ffe3723681ef7272ff4409eb2a7666129338" Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.051626 4735 generic.go:334] "Generic (PLEG): container finished" podID="32030c60-5a22-459e-b84f-a28dac3337b0" containerID="7ef39cc089929eadcd99043d33e28e1f802cca6a429e0e840d88ca794bf01cb0" exitCode=143 Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.051680 4735 generic.go:334] "Generic (PLEG): container finished" podID="32030c60-5a22-459e-b84f-a28dac3337b0" containerID="b3669922b424f7305f46042341cf7e1ba8c8937bf4a1f11308f0be192e912848" exitCode=143 Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.051700 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerDied","Data":"7ef39cc089929eadcd99043d33e28e1f802cca6a429e0e840d88ca794bf01cb0"} Jan 31 15:15:05 crc kubenswrapper[4735]: I0131 15:15:05.051748 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerDied","Data":"b3669922b424f7305f46042341cf7e1ba8c8937bf4a1f11308f0be192e912848"} Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.893635 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.936699 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:15:07 crc kubenswrapper[4735]: E0131 15:15:07.937047 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" containerName="collect-profiles" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.937076 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" containerName="collect-profiles" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.937220 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" containerName="collect-profiles" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.939897 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.942237 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.962893 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.986746 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.986851 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.986888 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4ks\" (UniqueName: \"kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.986934 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.986952 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.987006 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:07 crc kubenswrapper[4735]: I0131 15:15:07.987034 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.021209 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.071212 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-784979f994-vtd4m"] Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.072645 
4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.085843 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-784979f994-vtd4m"] Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.091470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.095856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.095995 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.096070 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4ks\" (UniqueName: \"kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.096131 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.096149 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.096244 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.097383 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.097790 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts\") pod 
\"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.098082 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.103544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.105567 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.111797 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.117659 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4ks\" (UniqueName: \"kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks\") pod \"horizon-7f754986cd-gdb8n\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197808 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-scripts\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197880 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-tls-certs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197912 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c022909b-46cd-4e9d-851e-483e23358bd8-logs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197927 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-config-data\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" 
Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197949 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-combined-ca-bundle\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.197988 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-secret-key\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.198013 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248ql\" (UniqueName: \"kubernetes.io/projected/c022909b-46cd-4e9d-851e-483e23358bd8-kube-api-access-248ql\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.267674 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299132 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-tls-certs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299445 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c022909b-46cd-4e9d-851e-483e23358bd8-logs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299467 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-config-data\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299493 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-combined-ca-bundle\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299511 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-secret-key\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299531 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248ql\" (UniqueName: 
\"kubernetes.io/projected/c022909b-46cd-4e9d-851e-483e23358bd8-kube-api-access-248ql\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.299608 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-scripts\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.300305 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-scripts\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.303883 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-secret-key\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.304364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-combined-ca-bundle\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.304410 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c022909b-46cd-4e9d-851e-483e23358bd8-logs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.305682 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c022909b-46cd-4e9d-851e-483e23358bd8-config-data\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.305729 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c022909b-46cd-4e9d-851e-483e23358bd8-horizon-tls-certs\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.323633 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248ql\" (UniqueName: \"kubernetes.io/projected/c022909b-46cd-4e9d-851e-483e23358bd8-kube-api-access-248ql\") pod \"horizon-784979f994-vtd4m\" (UID: \"c022909b-46cd-4e9d-851e-483e23358bd8\") " pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:08 crc kubenswrapper[4735]: I0131 15:15:08.401228 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:10.142591 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:10.193918 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:10.194131 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" containerID="cri-o://2ab77cb4f534ea7f45792b78d57e43de7686689d9e923cbab3b74ed0193d510a" gracePeriod=10 Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:12.123079 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerStarted","Data":"f0f398e887e9d80022c6620d24299bc1df560c22a089473b08bb6c92b8962e57"} Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.160718 4735 generic.go:334] "Generic (PLEG): container finished" podID="f746a606-2110-4dc1-8724-5fc004908bec" containerID="2ab77cb4f534ea7f45792b78d57e43de7686689d9e923cbab3b74ed0193d510a" exitCode=0 Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.161228 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-log" containerID="cri-o://246e83d652a28ab033779bfad5edcc429febd40b3dc7f2c5f50513388cdb2cb1" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.161570 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" event={"ID":"f746a606-2110-4dc1-8724-5fc004908bec","Type":"ContainerDied","Data":"2ab77cb4f534ea7f45792b78d57e43de7686689d9e923cbab3b74ed0193d510a"} Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.161844 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-httpd" containerID="cri-o://f0f398e887e9d80022c6620d24299bc1df560c22a089473b08bb6c92b8962e57" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.190252 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.190231413 podStartE2EDuration="14.190231413s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:13.180851298 +0000 UTC m=+998.954180360" watchObservedRunningTime="2026-01-31 15:15:13.190231413 +0000 UTC m=+998.963560455" Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.237628 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.961244 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-784979f994-vtd4m"] Jan 31 15:15:13 crc kubenswrapper[4735]: I0131 15:15:13.971607 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.171903 4735 generic.go:334] "Generic (PLEG): container finished" podID="0df3e527-b215-4bf6-b5d0-524d860670fc" containerID="972e531c10b2ff3ca68dc07df81cbca1f749b88e488aafb01f799a96591be545" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.171988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhpwz" event={"ID":"0df3e527-b215-4bf6-b5d0-524d860670fc","Type":"ContainerDied","Data":"972e531c10b2ff3ca68dc07df81cbca1f749b88e488aafb01f799a96591be545"} Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.175680 4735 generic.go:334] "Generic (PLEG): container finished" podID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerID="f0f398e887e9d80022c6620d24299bc1df560c22a089473b08bb6c92b8962e57" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.175709 4735 generic.go:334] "Generic (PLEG): container finished" podID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerID="246e83d652a28ab033779bfad5edcc429febd40b3dc7f2c5f50513388cdb2cb1" exitCode=143 Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.175730 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerDied","Data":"f0f398e887e9d80022c6620d24299bc1df560c22a089473b08bb6c92b8962e57"} Jan 31 15:15:14 crc kubenswrapper[4735]: I0131 15:15:14.175757 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerDied","Data":"246e83d652a28ab033779bfad5edcc429febd40b3dc7f2c5f50513388cdb2cb1"} Jan 31 15:15:18 crc kubenswrapper[4735]: I0131 15:15:18.237408 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 31 15:15:19 crc kubenswrapper[4735]: E0131 15:15:19.340051 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 31 15:15:19 crc kubenswrapper[4735]: E0131 15:15:19.340523 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh594h649h5c8hcbh89h5f7hfdhcfh684h699h55bh59ch547h5cdh85h57fh5fch57ch6hbdh7fh546h65ch84h9fhf4hcdh56dh677h79h577q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8d44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3b5bc7d4-ba0a-4ed0-990a-44186c837298): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:23 crc kubenswrapper[4735]: I0131 15:15:23.237664 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 31 15:15:23 crc kubenswrapper[4735]: I0131 15:15:23.238054 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:15:28 crc kubenswrapper[4735]: I0131 15:15:28.237244 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 31 15:15:29 crc kubenswrapper[4735]: E0131 15:15:29.655736 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 31 15:15:29 crc kubenswrapper[4735]: E0131 15:15:29.656147 4735 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tgk2f_openstack(d7152602-a51d-4f77-894f-6514ac5816b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:29 crc kubenswrapper[4735]: E0131 15:15:29.657455 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tgk2f" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" Jan 31 15:15:30 crc kubenswrapper[4735]: I0131 15:15:30.127589 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:15:30 crc kubenswrapper[4735]: I0131 15:15:30.127999 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:15:30 crc kubenswrapper[4735]: I0131 15:15:30.167281 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:30 crc kubenswrapper[4735]: I0131 15:15:30.167338 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:30 crc kubenswrapper[4735]: E0131 15:15:30.311996 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tgk2f" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" Jan 31 15:15:33 crc kubenswrapper[4735]: I0131 15:15:33.237700 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.517797 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.518236 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n544hc8h5dfh66ch6fh568h5ch5f5hfbh5f7h566h5fbhc7h57ch568h676h68ch5dfh5ch557h95h55h64dhfbhc9h647h67dh57dh656hb9h548h76q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nv6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8546f66f5c-h27dg_openstack(c63d6777-cad9-45c3-b5ea-795ecf384616): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.521276 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" 
pod="openstack/horizon-8546f66f5c-h27dg" podUID="c63d6777-cad9-45c3-b5ea-795ecf384616" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.530206 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.530368 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n96hc8h7h584h5b6h644hbch5dch654h557h85h666h66h595h5cch56h9fh5c7h567h688hd5h64fh5f4h55h578h8dhbchc8hb9hdbhchf7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swvgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6846b66477-2c8n2_openstack(b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.532251 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6846b66477-2c8n2" podUID="b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.557391 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.557575 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644hf6h687h64h6dh85h668h97h56dhf4h668h67fh5bh58hb6hf8h65bh58h676h85h65bhd6hfdh665h68bhb4h99h669h5d9h54dh587h5b6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pkqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-84f78fc85-vtd9t_openstack(2855e3d7-1280-4652-8188-5a36fa3c992b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:34 crc kubenswrapper[4735]: E0131 15:15:34.562284 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-84f78fc85-vtd9t" podUID="2855e3d7-1280-4652-8188-5a36fa3c992b" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.582346 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643147 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643270 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643321 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643365 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643399 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643458 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643522 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.643560 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9n4z\" (UniqueName: \"kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z\") pod \"32030c60-5a22-459e-b84f-a28dac3337b0\" (UID: \"32030c60-5a22-459e-b84f-a28dac3337b0\") " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.644741 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs" (OuterVolumeSpecName: "logs") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.645281 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.649801 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z" (OuterVolumeSpecName: "kube-api-access-l9n4z") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "kube-api-access-l9n4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.650226 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts" (OuterVolumeSpecName: "scripts") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.652567 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.673830 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.687623 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data" (OuterVolumeSpecName: "config-data") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.716673 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "32030c60-5a22-459e-b84f-a28dac3337b0" (UID: "32030c60-5a22-459e-b84f-a28dac3337b0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747116 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747154 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747167 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9n4z\" (UniqueName: \"kubernetes.io/projected/32030c60-5a22-459e-b84f-a28dac3337b0-kube-api-access-l9n4z\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747179 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32030c60-5a22-459e-b84f-a28dac3337b0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747190 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747225 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747237 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.747247 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32030c60-5a22-459e-b84f-a28dac3337b0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.765512 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:15:34 crc kubenswrapper[4735]: I0131 15:15:34.848353 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.354361 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerStarted","Data":"69f1566db94a372fdc33ca61ec6987358f85ef982a06385d45ac368c1e3c91ba"} Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.355776 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-784979f994-vtd4m" event={"ID":"c022909b-46cd-4e9d-851e-483e23358bd8","Type":"ContainerStarted","Data":"3833fc71ad01e87a4136856608494d3b708d1582e0a4d6e31407acb37a8bf2fc"} Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.358224 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32030c60-5a22-459e-b84f-a28dac3337b0","Type":"ContainerDied","Data":"cf2094207c97809ab546ed0b026d9555b24584a7d3c282c58c64acf2d8f4097d"} 
Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.358344 4735 scope.go:117] "RemoveContainer" containerID="7ef39cc089929eadcd99043d33e28e1f802cca6a429e0e840d88ca794bf01cb0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.358462 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.465925 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.466157 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.493766 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:35 crc kubenswrapper[4735]: E0131 15:15:35.495087 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-log" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.495125 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-log" Jan 31 15:15:35 crc kubenswrapper[4735]: E0131 15:15:35.495155 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-httpd" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.495165 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-httpd" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.495414 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-log" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.495444 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" containerName="glance-httpd" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.498162 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.500618 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.502883 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.505530 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560595 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560645 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwx6\" (UniqueName: \"kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560724 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560793 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560831 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.560868 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.572406 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32030c60-5a22-459e-b84f-a28dac3337b0" path="/var/lib/kubelet/pods/32030c60-5a22-459e-b84f-a28dac3337b0/volumes" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662571 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662621 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwx6\" (UniqueName: \"kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662645 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662695 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662778 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662796 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662813 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.662855 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.663862 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.663959 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.663869 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.671088 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.673515 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.676733 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.677458 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.684879 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwx6\" (UniqueName: \"kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 15:15:35.732190 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " pod="openstack/glance-default-external-api-0" Jan 31 15:15:35 crc kubenswrapper[4735]: I0131 
15:15:35.822861 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.035101 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.035559 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8j2xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tt9w8_openstack(972df6a4-e6ad-41de-9573-b80779a22bd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.036833 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tt9w8" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.066742 4735 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.075726 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172288 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172445 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172524 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172555 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwj7c\" (UniqueName: \"kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172587 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nxpw\" (UniqueName: \"kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172640 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172690 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172722 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172780 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172808 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172833 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172869 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data\") pod \"0df3e527-b215-4bf6-b5d0-524d860670fc\" (UID: \"0df3e527-b215-4bf6-b5d0-524d860670fc\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172915 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.172941 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs\") pod \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\" (UID: \"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.173725 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs" (OuterVolumeSpecName: "logs") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.174394 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.177466 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.177699 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.178217 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts" (OuterVolumeSpecName: "scripts") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.178389 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.180077 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c" (OuterVolumeSpecName: "kube-api-access-gwj7c") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "kube-api-access-gwj7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.180582 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts" (OuterVolumeSpecName: "scripts") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.181380 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.185329 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw" (OuterVolumeSpecName: "kube-api-access-7nxpw") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "kube-api-access-7nxpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.199720 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.207401 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data" (OuterVolumeSpecName: "config-data") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.208859 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0df3e527-b215-4bf6-b5d0-524d860670fc" (UID: "0df3e527-b215-4bf6-b5d0-524d860670fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.238125 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.243178 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data" (OuterVolumeSpecName: "config-data") pod "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" (UID: "8eab194d-dd52-47d7-b9dc-7fa4d97ffa74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275541 4735 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275567 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275576 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275586 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275610 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275618 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275627 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275639 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275647 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275655 4735 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0df3e527-b215-4bf6-b5d0-524d860670fc-credential-keys\") on node \"crc\" DevicePath \"\"" 
Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275663 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwj7c\" (UniqueName: \"kubernetes.io/projected/0df3e527-b215-4bf6-b5d0-524d860670fc-kube-api-access-gwj7c\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275672 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nxpw\" (UniqueName: \"kubernetes.io/projected/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-kube-api-access-7nxpw\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.275680 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.303442 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.369896 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8eab194d-dd52-47d7-b9dc-7fa4d97ffa74","Type":"ContainerDied","Data":"6e3a0ef9f47dcdf896b3aa9bf9c8c4c7428ad70f0d7725b065ce77566fa04f2c"} Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.369928 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.376772 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.380221 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rhpwz" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.380689 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rhpwz" event={"ID":"0df3e527-b215-4bf6-b5d0-524d860670fc","Type":"ContainerDied","Data":"ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b"} Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.380773 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff63f5b914b32f14d58803ea8ede0100ac66f7d1eb6cc29ba1aba8d2bca41c7b" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.382373 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tt9w8" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.435376 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.443816 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.454754 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.455190 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-httpd" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455204 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-httpd" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.455220 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df3e527-b215-4bf6-b5d0-524d860670fc" containerName="keystone-bootstrap" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455226 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df3e527-b215-4bf6-b5d0-524d860670fc" containerName="keystone-bootstrap" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.455241 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-log" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455249 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-log" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455470 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-httpd" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455483 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" containerName="glance-log" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.455498 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df3e527-b215-4bf6-b5d0-524d860670fc" containerName="keystone-bootstrap" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.456559 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.461645 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.461815 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.474862 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579407 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579507 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579624 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579647 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvxb\" (UniqueName: \"kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579728 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.579763 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.681739 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvxb\" (UniqueName: \"kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.681781 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.681815 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.681878 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.681939 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.682803 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.682861 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.682927 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.682982 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.684527 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.684597 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.688377 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.688660 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.688975 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.690180 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.701207 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvxb\" (UniqueName: \"kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.709579 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.730251 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 
31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.730435 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4w7dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-bhf6c_openstack(c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:15:36 crc kubenswrapper[4735]: E0131 15:15:36.731831 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-bhf6c" podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.770667 4735 scope.go:117] "RemoveContainer" containerID="b3669922b424f7305f46042341cf7e1ba8c8937bf4a1f11308f0be192e912848" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.775262 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.836222 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.836255 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.843220 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.852463 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887030 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc\") pod \"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887188 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb\") pod \"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887215 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config\") pod \"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887235 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data\") pod \"2855e3d7-1280-4652-8188-5a36fa3c992b\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887292 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data\") pod \"c63d6777-cad9-45c3-b5ea-795ecf384616\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887365 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pkqr\" (UniqueName: \"kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr\") pod \"2855e3d7-1280-4652-8188-5a36fa3c992b\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887394 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb\") pod \"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887446 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swvgj\" (UniqueName: \"kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj\") pod \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887478 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key\") pod \"2855e3d7-1280-4652-8188-5a36fa3c992b\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887502 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68lqj\" (UniqueName: \"kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj\") pod 
\"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887521 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0\") pod \"f746a606-2110-4dc1-8724-5fc004908bec\" (UID: \"f746a606-2110-4dc1-8724-5fc004908bec\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887552 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key\") pod \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887578 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs\") pod \"2855e3d7-1280-4652-8188-5a36fa3c992b\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887623 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts\") pod \"c63d6777-cad9-45c3-b5ea-795ecf384616\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887654 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts\") pod \"2855e3d7-1280-4652-8188-5a36fa3c992b\" (UID: \"2855e3d7-1280-4652-8188-5a36fa3c992b\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887673 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs\") pod \"c63d6777-cad9-45c3-b5ea-795ecf384616\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887693 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data\") pod \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887716 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nv6x\" (UniqueName: \"kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x\") pod \"c63d6777-cad9-45c3-b5ea-795ecf384616\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887767 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs\") pod \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts\") pod \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\" (UID: \"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7\") " 
Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.887824 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key\") pod \"c63d6777-cad9-45c3-b5ea-795ecf384616\" (UID: \"c63d6777-cad9-45c3-b5ea-795ecf384616\") " Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.890688 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs" (OuterVolumeSpecName: "logs") pod "2855e3d7-1280-4652-8188-5a36fa3c992b" (UID: "2855e3d7-1280-4652-8188-5a36fa3c992b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.890796 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data" (OuterVolumeSpecName: "config-data") pod "2855e3d7-1280-4652-8188-5a36fa3c992b" (UID: "2855e3d7-1280-4652-8188-5a36fa3c992b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.892287 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr" (OuterVolumeSpecName: "kube-api-access-4pkqr") pod "2855e3d7-1280-4652-8188-5a36fa3c992b" (UID: "2855e3d7-1280-4652-8188-5a36fa3c992b"). InnerVolumeSpecName "kube-api-access-4pkqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.893535 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data" (OuterVolumeSpecName: "config-data") pod "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" (UID: "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.893177 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs" (OuterVolumeSpecName: "logs") pod "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" (UID: "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.894069 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data" (OuterVolumeSpecName: "config-data") pod "c63d6777-cad9-45c3-b5ea-795ecf384616" (UID: "c63d6777-cad9-45c3-b5ea-795ecf384616"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.894112 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts" (OuterVolumeSpecName: "scripts") pod "c63d6777-cad9-45c3-b5ea-795ecf384616" (UID: "c63d6777-cad9-45c3-b5ea-795ecf384616"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.894438 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs" (OuterVolumeSpecName: "logs") pod "c63d6777-cad9-45c3-b5ea-795ecf384616" (UID: "c63d6777-cad9-45c3-b5ea-795ecf384616"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.894581 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts" (OuterVolumeSpecName: "scripts") pod "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" (UID: "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.894910 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts" (OuterVolumeSpecName: "scripts") pod "2855e3d7-1280-4652-8188-5a36fa3c992b" (UID: "2855e3d7-1280-4652-8188-5a36fa3c992b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.900434 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c63d6777-cad9-45c3-b5ea-795ecf384616" (UID: "c63d6777-cad9-45c3-b5ea-795ecf384616"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.904588 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj" (OuterVolumeSpecName: "kube-api-access-swvgj") pod "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" (UID: "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7"). InnerVolumeSpecName "kube-api-access-swvgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.906367 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" (UID: "b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.908377 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2855e3d7-1280-4652-8188-5a36fa3c992b" (UID: "2855e3d7-1280-4652-8188-5a36fa3c992b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.916654 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj" (OuterVolumeSpecName: "kube-api-access-68lqj") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "kube-api-access-68lqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.916699 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x" (OuterVolumeSpecName: "kube-api-access-7nv6x") pod "c63d6777-cad9-45c3-b5ea-795ecf384616" (UID: "c63d6777-cad9-45c3-b5ea-795ecf384616"). InnerVolumeSpecName "kube-api-access-7nv6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.948505 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.952674 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config" (OuterVolumeSpecName: "config") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.961399 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.983832 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:36 crc kubenswrapper[4735]: I0131 15:15:36.984160 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f746a606-2110-4dc1-8724-5fc004908bec" (UID: "f746a606-2110-4dc1-8724-5fc004908bec"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004867 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004921 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2855e3d7-1280-4652-8188-5a36fa3c992b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004936 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004951 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004969 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c63d6777-cad9-45c3-b5ea-795ecf384616-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004983 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.004996 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nv6x\" (UniqueName: \"kubernetes.io/projected/c63d6777-cad9-45c3-b5ea-795ecf384616-kube-api-access-7nv6x\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005011 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005029 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005041 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c63d6777-cad9-45c3-b5ea-795ecf384616-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005053 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005065 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005081 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005096 4735 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2855e3d7-1280-4652-8188-5a36fa3c992b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005107 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c63d6777-cad9-45c3-b5ea-795ecf384616-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005119 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pkqr\" (UniqueName: \"kubernetes.io/projected/2855e3d7-1280-4652-8188-5a36fa3c992b-kube-api-access-4pkqr\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005135 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005146 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swvgj\" (UniqueName: \"kubernetes.io/projected/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7-kube-api-access-swvgj\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005160 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2855e3d7-1280-4652-8188-5a36fa3c992b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005175 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68lqj\" (UniqueName: \"kubernetes.io/projected/f746a606-2110-4dc1-8724-5fc004908bec-kube-api-access-68lqj\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.005189 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f746a606-2110-4dc1-8724-5fc004908bec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.176867 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rhpwz"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.184558 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rhpwz"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.206480 4735 scope.go:117] "RemoveContainer" containerID="f0f398e887e9d80022c6620d24299bc1df560c22a089473b08bb6c92b8962e57" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.250294 4735 scope.go:117] "RemoveContainer" containerID="246e83d652a28ab033779bfad5edcc429febd40b3dc7f2c5f50513388cdb2cb1" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.297998 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rzt6q"] Jan 31 15:15:37 crc kubenswrapper[4735]: E0131 15:15:37.298404 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.298440 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" Jan 31 15:15:37 crc kubenswrapper[4735]: E0131 15:15:37.298464 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="init" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.298473 4735 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="init" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.298915 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f746a606-2110-4dc1-8724-5fc004908bec" containerName="dnsmasq-dns" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.299571 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.301957 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.302053 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.302443 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pfppv" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.305604 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.305830 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.316954 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rzt6q"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.393624 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" event={"ID":"f746a606-2110-4dc1-8724-5fc004908bec","Type":"ContainerDied","Data":"515d9cd818f26866ade682633d9e314f9e25c1fc278ac4cfe553d22ea4d68982"} Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.394026 4735 scope.go:117] "RemoveContainer" containerID="2ab77cb4f534ea7f45792b78d57e43de7686689d9e923cbab3b74ed0193d510a" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.394209 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-g5j87" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.398778 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8546f66f5c-h27dg" event={"ID":"c63d6777-cad9-45c3-b5ea-795ecf384616","Type":"ContainerDied","Data":"af9842fc1a5a8796d22fbe186f32e8a4b16785b01e6f5b9945f7cc5a738b2e88"} Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.398797 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8546f66f5c-h27dg" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.402935 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84f78fc85-vtd9t" event={"ID":"2855e3d7-1280-4652-8188-5a36fa3c992b","Type":"ContainerDied","Data":"f10f378fc350233fb052c7c66bc64d05666180beb480af63745e543c59290f70"} Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.403025 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84f78fc85-vtd9t" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.410414 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.410660 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.410902 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv988\" (UniqueName: \"kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.411047 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.411089 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.411152 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.418317 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6846b66477-2c8n2" event={"ID":"b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7","Type":"ContainerDied","Data":"09056a0f2e5c5bf3fe74575408abe726da258cdd073a6626fae4795a2ccf5638"} Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.418409 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6846b66477-2c8n2" Jan 31 15:15:37 crc kubenswrapper[4735]: E0131 15:15:37.432038 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-bhf6c" podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.432327 4735 scope.go:117] "RemoveContainer" containerID="42c72b63d8ea520a4212a7114d0922fea2c3c7f32e69a47cd354fd8886499686" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.484949 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.500787 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-g5j87"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513319 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513348 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513635 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.513702 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv988\" (UniqueName: \"kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.519895 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.522917 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.524435 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.532755 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.533059 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.535307 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84f78fc85-vtd9t"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.532705 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.540138 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv988\" (UniqueName: \"kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988\") pod \"keystone-bootstrap-rzt6q\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.556197 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df3e527-b215-4bf6-b5d0-524d860670fc" path="/var/lib/kubelet/pods/0df3e527-b215-4bf6-b5d0-524d860670fc/volumes" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.559754 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2855e3d7-1280-4652-8188-5a36fa3c992b" path="/var/lib/kubelet/pods/2855e3d7-1280-4652-8188-5a36fa3c992b/volumes" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.560357 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eab194d-dd52-47d7-b9dc-7fa4d97ffa74" path="/var/lib/kubelet/pods/8eab194d-dd52-47d7-b9dc-7fa4d97ffa74/volumes" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.561116 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f746a606-2110-4dc1-8724-5fc004908bec" path="/var/lib/kubelet/pods/f746a606-2110-4dc1-8724-5fc004908bec/volumes" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.567413 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.587193 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-6846b66477-2c8n2"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.611094 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.621734 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.623127 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8546f66f5c-h27dg"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.855584 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:15:37 crc kubenswrapper[4735]: I0131 15:15:37.979584 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.099402 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rzt6q"] Jan 31 15:15:38 crc kubenswrapper[4735]: W0131 15:15:38.109211 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce096009_4177_43cf_a0c2_76f2888ebea1.slice/crio-d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986 WatchSource:0}: Error finding container d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986: Status 404 returned error can't find the container with id d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986 Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.429403 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rzt6q" event={"ID":"ce096009-4177-43cf-a0c2-76f2888ebea1","Type":"ContainerStarted","Data":"5460f3b0dbc29ce53229a15e1d9182b9d50ea1b509f665e81cce8a75498bc15d"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.429765 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rzt6q" event={"ID":"ce096009-4177-43cf-a0c2-76f2888ebea1","Type":"ContainerStarted","Data":"d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.433123 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerStarted","Data":"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.438961 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerStarted","Data":"502456fd1de1026a07431dbd3dae3b054005bc33fecf428146696f57607d0db7"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.438987 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerStarted","Data":"c66d6894f5d7b8ef37aaaed37239780e27cdf90c09aed9df2f36f26fa6784c64"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.440733 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerStarted","Data":"227e962ddb3af152008a8aeea8385742d705877ce4a6abc86d91b8ed552b30c8"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.444117 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-784979f994-vtd4m" event={"ID":"c022909b-46cd-4e9d-851e-483e23358bd8","Type":"ContainerStarted","Data":"c3aaaf9e3d605844b1739feed32ea7a0f977124f6af7193a1462062ed96db7b7"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.444170 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-784979f994-vtd4m" event={"ID":"c022909b-46cd-4e9d-851e-483e23358bd8","Type":"ContainerStarted","Data":"632ee7705c71fdfdabb3cb8b68e39905859f5ddf52f3cc2f612330b7738dd593"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.449086 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerStarted","Data":"78586c1c6ef2809a6630598829906e8c624230af76a454ccda96172273c260cf"} Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.452309 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rzt6q" podStartSLOduration=1.4522916829999999 podStartE2EDuration="1.452291683s" podCreationTimestamp="2026-01-31 15:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:38.447192899 +0000 UTC m=+1024.220521951" watchObservedRunningTime="2026-01-31 15:15:38.452291683 +0000 UTC m=+1024.225620725" Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.475496 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-784979f994-vtd4m" podStartSLOduration=27.706459126 podStartE2EDuration="30.475478729s" podCreationTimestamp="2026-01-31 15:15:08 +0000 UTC" firstStartedPulling="2026-01-31 15:15:34.497362996 +0000 UTC m=+1020.270692038" lastFinishedPulling="2026-01-31 15:15:37.266382599 +0000 UTC m=+1023.039711641" observedRunningTime="2026-01-31 15:15:38.4719914 +0000 UTC m=+1024.245320472" watchObservedRunningTime="2026-01-31 15:15:38.475478729 +0000 UTC m=+1024.248807771" Jan 31 15:15:38 crc kubenswrapper[4735]: I0131 15:15:38.513798 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f754986cd-gdb8n" podStartSLOduration=28.793889627 podStartE2EDuration="31.513769741s" podCreationTimestamp="2026-01-31 15:15:07 +0000 UTC" firstStartedPulling="2026-01-31 15:15:34.495144754 +0000 UTC m=+1020.268473826" lastFinishedPulling="2026-01-31 15:15:37.215024898 +0000 UTC m=+1022.988353940" observedRunningTime="2026-01-31 15:15:38.495132374 +0000 UTC m=+1024.268461436" watchObservedRunningTime="2026-01-31 15:15:38.513769741 +0000 UTC m=+1024.287098793" Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.461916 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerStarted","Data":"9c8ac6ac238b16eeb490fab3f29c77bc494a1e409869a8306478eed4d0d6219e"} Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.462537 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerStarted","Data":"8e3fa09ea9e28525aa336edf5d5754a51831f07f43133acf1a19f456f601fd93"} Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.465792 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerStarted","Data":"39d79d7c8770fa2c36d71289a49a0e80bd191daa2d0ff3babf55af4000024967"} Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.465833 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerStarted","Data":"0ec439ab2f57d0f14631a62bdac7d049b8efdd63f942251ce9f160accff1e3cb"} Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.499088 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.499061326 podStartE2EDuration="3.499061326s" podCreationTimestamp="2026-01-31 15:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:39.481285183 +0000 UTC m=+1025.254614225" watchObservedRunningTime="2026-01-31 15:15:39.499061326 +0000 UTC m=+1025.272390368" Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.505045 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.505022294 podStartE2EDuration="4.505022294s" podCreationTimestamp="2026-01-31 15:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:39.504616183 +0000 UTC m=+1025.277945225" watchObservedRunningTime="2026-01-31 15:15:39.505022294 +0000 UTC m=+1025.278351346" Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.557282 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7" path="/var/lib/kubelet/pods/b8c6c97b-df77-4ba4-bd86-f1e4dfab89e7/volumes" Jan 31 15:15:39 crc kubenswrapper[4735]: I0131 15:15:39.558069 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63d6777-cad9-45c3-b5ea-795ecf384616" path="/var/lib/kubelet/pods/c63d6777-cad9-45c3-b5ea-795ecf384616/volumes" Jan 31 15:15:43 crc kubenswrapper[4735]: I0131 15:15:43.525971 4735 generic.go:334] "Generic (PLEG): container finished" podID="ce096009-4177-43cf-a0c2-76f2888ebea1" containerID="5460f3b0dbc29ce53229a15e1d9182b9d50ea1b509f665e81cce8a75498bc15d" exitCode=0 Jan 31 15:15:43 crc kubenswrapper[4735]: I0131 15:15:43.526041 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rzt6q" event={"ID":"ce096009-4177-43cf-a0c2-76f2888ebea1","Type":"ContainerDied","Data":"5460f3b0dbc29ce53229a15e1d9182b9d50ea1b509f665e81cce8a75498bc15d"} Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.202299 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.368734 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.368857 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.368930 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.368977 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv988\" (UniqueName: \"kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.369039 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.369090 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle\") pod \"ce096009-4177-43cf-a0c2-76f2888ebea1\" (UID: \"ce096009-4177-43cf-a0c2-76f2888ebea1\") " Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.374325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.374887 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988" (OuterVolumeSpecName: "kube-api-access-gv988") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "kube-api-access-gv988". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.375955 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts" (OuterVolumeSpecName: "scripts") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.378024 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.405679 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.420476 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data" (OuterVolumeSpecName: "config-data") pod "ce096009-4177-43cf-a0c2-76f2888ebea1" (UID: "ce096009-4177-43cf-a0c2-76f2888ebea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471328 4735 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471364 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471378 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv988\" (UniqueName: \"kubernetes.io/projected/ce096009-4177-43cf-a0c2-76f2888ebea1-kube-api-access-gv988\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471392 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471403 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.471446 4735 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce096009-4177-43cf-a0c2-76f2888ebea1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.556772 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgk2f" event={"ID":"d7152602-a51d-4f77-894f-6514ac5816b7","Type":"ContainerStarted","Data":"9b64367da740770fbbd72062799ce4fab6ac349cd2622f10d0fd2572037abfa1"} Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.556817 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerStarted","Data":"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50"} Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.556985 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rzt6q" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.557708 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rzt6q" event={"ID":"ce096009-4177-43cf-a0c2-76f2888ebea1","Type":"ContainerDied","Data":"d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986"} Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.557739 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4395d58c0771d930b2caeca1b1fd3da9e5a66eedff864e1f98d91ad1ccc0986" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.581294 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tgk2f" podStartSLOduration=2.616827787 podStartE2EDuration="46.58127307s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" firstStartedPulling="2026-01-31 15:15:01.067622526 +0000 UTC m=+986.840951568" lastFinishedPulling="2026-01-31 15:15:45.032067769 +0000 UTC m=+1030.805396851" observedRunningTime="2026-01-31 15:15:45.57702857 +0000 UTC m=+1031.350357632" watchObservedRunningTime="2026-01-31 15:15:45.58127307 +0000 UTC m=+1031.354602102" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.587710 4735 generic.go:334] "Generic (PLEG): container finished" podID="2a5fd5ef-5566-4f7c-8e51-ed296536a540" containerID="31748d0a3288e5b8c7638f7250f52d80646f674cdc5d2f03b5998bcff84972c1" exitCode=0 Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.587767 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xrbt2" event={"ID":"2a5fd5ef-5566-4f7c-8e51-ed296536a540","Type":"ContainerDied","Data":"31748d0a3288e5b8c7638f7250f52d80646f674cdc5d2f03b5998bcff84972c1"} Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.696274 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b94ccc6d9-2fktc"] Jan 31 15:15:45 crc kubenswrapper[4735]: E0131 15:15:45.696957 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce096009-4177-43cf-a0c2-76f2888ebea1" containerName="keystone-bootstrap" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.697062 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce096009-4177-43cf-a0c2-76f2888ebea1" containerName="keystone-bootstrap" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.697806 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce096009-4177-43cf-a0c2-76f2888ebea1" containerName="keystone-bootstrap" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.698622 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.701285 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.701586 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.701812 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.701850 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pfppv" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.704708 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.705336 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.717476 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b94ccc6d9-2fktc"] Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.823204 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.824800 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.857458 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.872036 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-scripts\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881589 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-public-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-internal-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881789 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27g6\" (UniqueName: \"kubernetes.io/projected/3d196991-4f4f-4bb3-a113-b33659619f09-kube-api-access-z27g6\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " 
pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881890 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-credential-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.881976 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-combined-ca-bundle\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.882116 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-fernet-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.882194 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-config-data\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.983355 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-fernet-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.983435 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-config-data\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.983483 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-scripts\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.983541 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-public-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.984493 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-internal-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 
15:15:45.984665 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27g6\" (UniqueName: \"kubernetes.io/projected/3d196991-4f4f-4bb3-a113-b33659619f09-kube-api-access-z27g6\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.984816 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-credential-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.984895 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-combined-ca-bundle\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.988144 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-scripts\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.990815 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-fernet-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.991933 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-internal-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.992147 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-config-data\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.992168 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-credential-keys\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.993677 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-public-tls-certs\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:45 crc kubenswrapper[4735]: I0131 15:15:45.994037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d196991-4f4f-4bb3-a113-b33659619f09-combined-ca-bundle\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.006447 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27g6\" (UniqueName: \"kubernetes.io/projected/3d196991-4f4f-4bb3-a113-b33659619f09-kube-api-access-z27g6\") pod \"keystone-b94ccc6d9-2fktc\" (UID: \"3d196991-4f4f-4bb3-a113-b33659619f09\") " pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.016379 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.455971 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b94ccc6d9-2fktc"] Jan 31 15:15:46 crc kubenswrapper[4735]: W0131 15:15:46.458103 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d196991_4f4f_4bb3_a113_b33659619f09.slice/crio-fbca094d09667e83596570b58024c149f16d2de6eed1f58b9aedcd4e0dad220d WatchSource:0}: Error finding container fbca094d09667e83596570b58024c149f16d2de6eed1f58b9aedcd4e0dad220d: Status 404 returned error can't find the container with id fbca094d09667e83596570b58024c149f16d2de6eed1f58b9aedcd4e0dad220d Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.598073 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b94ccc6d9-2fktc" event={"ID":"3d196991-4f4f-4bb3-a113-b33659619f09","Type":"ContainerStarted","Data":"fbca094d09667e83596570b58024c149f16d2de6eed1f58b9aedcd4e0dad220d"} Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.601404 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.601457 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.776558 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.776605 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.827260 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.829148 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:46 crc kubenswrapper[4735]: I0131 15:15:46.863337 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.001225 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config\") pod \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.002028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9xjf\" (UniqueName: \"kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf\") pod \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.002244 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle\") pod \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\" (UID: \"2a5fd5ef-5566-4f7c-8e51-ed296536a540\") " Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.016663 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf" (OuterVolumeSpecName: "kube-api-access-f9xjf") pod "2a5fd5ef-5566-4f7c-8e51-ed296536a540" (UID: "2a5fd5ef-5566-4f7c-8e51-ed296536a540"). InnerVolumeSpecName "kube-api-access-f9xjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.034853 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config" (OuterVolumeSpecName: "config") pod "2a5fd5ef-5566-4f7c-8e51-ed296536a540" (UID: "2a5fd5ef-5566-4f7c-8e51-ed296536a540"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.037012 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5fd5ef-5566-4f7c-8e51-ed296536a540" (UID: "2a5fd5ef-5566-4f7c-8e51-ed296536a540"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.104762 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.104974 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9xjf\" (UniqueName: \"kubernetes.io/projected/2a5fd5ef-5566-4f7c-8e51-ed296536a540-kube-api-access-f9xjf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.105105 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5fd5ef-5566-4f7c-8e51-ed296536a540-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.605895 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b94ccc6d9-2fktc" event={"ID":"3d196991-4f4f-4bb3-a113-b33659619f09","Type":"ContainerStarted","Data":"fa85a0b287c85743b9dbfab17eb8e11208db7ac0df1ac8c8f1a78408c0c91ad1"} Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.606142 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.607086 4735 generic.go:334] "Generic (PLEG): container finished" podID="d7152602-a51d-4f77-894f-6514ac5816b7" containerID="9b64367da740770fbbd72062799ce4fab6ac349cd2622f10d0fd2572037abfa1" exitCode=0 Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.607129 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgk2f" event={"ID":"d7152602-a51d-4f77-894f-6514ac5816b7","Type":"ContainerDied","Data":"9b64367da740770fbbd72062799ce4fab6ac349cd2622f10d0fd2572037abfa1"} Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.608471 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xrbt2" event={"ID":"2a5fd5ef-5566-4f7c-8e51-ed296536a540","Type":"ContainerDied","Data":"2c0cb30d13d20437e4077e1c08b9e08b28d15d4e30c2775baf8a43f7a91c61f2"} Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.608535 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0cb30d13d20437e4077e1c08b9e08b28d15d4e30c2775baf8a43f7a91c61f2" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.608487 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xrbt2" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.609298 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.609318 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.648945 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b94ccc6d9-2fktc" podStartSLOduration=2.648924431 podStartE2EDuration="2.648924431s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:47.63649826 +0000 UTC m=+1033.409827332" watchObservedRunningTime="2026-01-31 15:15:47.648924431 +0000 UTC m=+1033.422253503" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.777687 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:15:47 crc kubenswrapper[4735]: E0131 15:15:47.778130 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5fd5ef-5566-4f7c-8e51-ed296536a540" containerName="neutron-db-sync" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.778147 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5fd5ef-5566-4f7c-8e51-ed296536a540" containerName="neutron-db-sync" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.778340 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5fd5ef-5566-4f7c-8e51-ed296536a540" containerName="neutron-db-sync" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.788491 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.801655 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.879800 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.884209 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.889036 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.889125 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nqdgj" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.889036 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.891918 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.897478 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.922852 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.922917 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pjzz\" (UniqueName: \"kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.922976 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.922999 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.923029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:47 crc kubenswrapper[4735]: I0131 15:15:47.923064 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025617 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025663 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025680 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpg9\" (UniqueName: \"kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025746 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025783 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025812 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025853 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025890 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pjzz\" (UniqueName: \"kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025915 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.025933 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.026869 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.027022 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.027236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.030272 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.036026 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.051881 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pjzz\" (UniqueName: \"kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz\") pod \"dnsmasq-dns-55f844cf75-cgvb8\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.110115 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.126956 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.127054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.127078 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.127119 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.127140 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhpg9\" (UniqueName: \"kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.134044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.138043 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.147295 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhpg9\" (UniqueName: \"kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.147571 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.168196 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs\") pod \"neutron-894686c96-gsnzd\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.248857 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.268572 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.268608 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.272458 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.405876 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.406199 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.411696 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-784979f994-vtd4m" podUID="c022909b-46cd-4e9d-851e-483e23358bd8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.643442 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.643484 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.652551 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.908208 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 15:15:48 crc kubenswrapper[4735]: I0131 15:15:48.912564 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.025203 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.034224 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgk2f" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.170765 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs\") pod \"d7152602-a51d-4f77-894f-6514ac5816b7\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.171033 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data\") pod \"d7152602-a51d-4f77-894f-6514ac5816b7\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.171070 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle\") pod \"d7152602-a51d-4f77-894f-6514ac5816b7\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.171145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts\") pod \"d7152602-a51d-4f77-894f-6514ac5816b7\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.171784 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dz66\" (UniqueName: \"kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66\") pod \"d7152602-a51d-4f77-894f-6514ac5816b7\" (UID: \"d7152602-a51d-4f77-894f-6514ac5816b7\") " Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.171788 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs" (OuterVolumeSpecName: "logs") pod "d7152602-a51d-4f77-894f-6514ac5816b7" (UID: "d7152602-a51d-4f77-894f-6514ac5816b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.172189 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7152602-a51d-4f77-894f-6514ac5816b7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.175190 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66" (OuterVolumeSpecName: "kube-api-access-8dz66") pod "d7152602-a51d-4f77-894f-6514ac5816b7" (UID: "d7152602-a51d-4f77-894f-6514ac5816b7"). InnerVolumeSpecName "kube-api-access-8dz66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.181654 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts" (OuterVolumeSpecName: "scripts") pod "d7152602-a51d-4f77-894f-6514ac5816b7" (UID: "d7152602-a51d-4f77-894f-6514ac5816b7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.210868 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7152602-a51d-4f77-894f-6514ac5816b7" (UID: "d7152602-a51d-4f77-894f-6514ac5816b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.211446 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data" (OuterVolumeSpecName: "config-data") pod "d7152602-a51d-4f77-894f-6514ac5816b7" (UID: "d7152602-a51d-4f77-894f-6514ac5816b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.274379 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.274452 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dz66\" (UniqueName: \"kubernetes.io/projected/d7152602-a51d-4f77-894f-6514ac5816b7-kube-api-access-8dz66\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.274469 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.274482 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7152602-a51d-4f77-894f-6514ac5816b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.653667 4735 generic.go:334] "Generic (PLEG): container finished" podID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerID="d367b4b669abc98897eba62da754837c09f7e183d1f0a41fca468ba66dd208b3" exitCode=0 Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.653880 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" event={"ID":"1d1e0f10-c620-4389-ad20-abc3cc647615","Type":"ContainerDied","Data":"d367b4b669abc98897eba62da754837c09f7e183d1f0a41fca468ba66dd208b3"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.653972 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" event={"ID":"1d1e0f10-c620-4389-ad20-abc3cc647615","Type":"ContainerStarted","Data":"1e3003f65006eeedfefe80d636c07a03d78c7442dcf3f434342d8da5ab32c907"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.659544 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerStarted","Data":"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.659582 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerStarted","Data":"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 
15:15:49.659592 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerStarted","Data":"626a6753ae2598a75d5b97890ecfca93df415a3595fa09ab34d95162fb8b5dcd"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.659626 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.664038 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgk2f" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.664825 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgk2f" event={"ID":"d7152602-a51d-4f77-894f-6514ac5816b7","Type":"ContainerDied","Data":"e4a91927242c5a54a8e1b3169f473b3997081a4e166d1e8629658b7ef969899a"} Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.664853 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a91927242c5a54a8e1b3169f473b3997081a4e166d1e8629658b7ef969899a" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.695526 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-894686c96-gsnzd" podStartSLOduration=2.6954907759999998 podStartE2EDuration="2.695490776s" podCreationTimestamp="2026-01-31 15:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:49.693654374 +0000 UTC m=+1035.466983416" watchObservedRunningTime="2026-01-31 15:15:49.695490776 +0000 UTC m=+1035.468819818" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.801654 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-766f55bc7b-w8qbt"] Jan 31 15:15:49 crc kubenswrapper[4735]: E0131 15:15:49.802081 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" containerName="placement-db-sync" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.802094 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" containerName="placement-db-sync" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.802262 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" containerName="placement-db-sync" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.805372 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.808373 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.813667 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.813893 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.814043 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.814274 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-86jhn" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.824637 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-766f55bc7b-w8qbt"] Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888052 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-config-data\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888136 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-scripts\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888204 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs2fx\" (UniqueName: \"kubernetes.io/projected/a0e33520-34a2-4009-9f61-7b6211fa8744-kube-api-access-cs2fx\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888222 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-internal-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888239 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-public-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888267 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e33520-34a2-4009-9f61-7b6211fa8744-logs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.888283 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-combined-ca-bundle\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.989383 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e33520-34a2-4009-9f61-7b6211fa8744-logs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.989443 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-combined-ca-bundle\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.989506 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-config-data\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.989888 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e33520-34a2-4009-9f61-7b6211fa8744-logs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.989557 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-scripts\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.990224 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs2fx\" (UniqueName: \"kubernetes.io/projected/a0e33520-34a2-4009-9f61-7b6211fa8744-kube-api-access-cs2fx\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.990246 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-internal-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.990265 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-public-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.993366 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-public-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.994153 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-combined-ca-bundle\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.998013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-scripts\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.998269 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-config-data\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:49 crc kubenswrapper[4735]: I0131 15:15:49.998295 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0e33520-34a2-4009-9f61-7b6211fa8744-internal-tls-certs\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.005909 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs2fx\" (UniqueName: \"kubernetes.io/projected/a0e33520-34a2-4009-9f61-7b6211fa8744-kube-api-access-cs2fx\") pod \"placement-766f55bc7b-w8qbt\" (UID: \"a0e33520-34a2-4009-9f61-7b6211fa8744\") " pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.123996 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74cd5d5cd9-8xjdv"] Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.125255 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.128363 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.128620 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.147044 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cd5d5cd9-8xjdv"] Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.172121 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194200 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194240 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-ovndb-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-public-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194290 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-httpd-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-internal-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194513 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5xf\" (UniqueName: \"kubernetes.io/projected/e70f9259-db47-4290-9778-8bf2849a809a-kube-api-access-4m5xf\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.194575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-combined-ca-bundle\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-httpd-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295564 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-internal-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: 
\"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295629 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m5xf\" (UniqueName: \"kubernetes.io/projected/e70f9259-db47-4290-9778-8bf2849a809a-kube-api-access-4m5xf\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295665 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-combined-ca-bundle\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295716 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295736 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-ovndb-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.295754 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-public-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.303019 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-ovndb-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.303253 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.303937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-httpd-config\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.304484 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-public-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.306056 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-combined-ca-bundle\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.317254 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m5xf\" (UniqueName: \"kubernetes.io/projected/e70f9259-db47-4290-9778-8bf2849a809a-kube-api-access-4m5xf\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.319224 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70f9259-db47-4290-9778-8bf2849a809a-internal-tls-certs\") pod \"neutron-74cd5d5cd9-8xjdv\" (UID: \"e70f9259-db47-4290-9778-8bf2849a809a\") " pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.407661 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.407957 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.464269 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.600797 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.677880 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" event={"ID":"1d1e0f10-c620-4389-ad20-abc3cc647615","Type":"ContainerStarted","Data":"cdbcf6043ba7b8dc3c449dfea86c96188bb5749f548cd095bf9a9bc5a76bdd8b"} Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.678861 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.716468 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" podStartSLOduration=3.71641004 podStartE2EDuration="3.71641004s" podCreationTimestamp="2026-01-31 15:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:50.699099931 +0000 UTC m=+1036.472428983" watchObservedRunningTime="2026-01-31 15:15:50.71641004 +0000 UTC m=+1036.489739092" Jan 31 15:15:50 crc kubenswrapper[4735]: I0131 15:15:50.736496 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-766f55bc7b-w8qbt"] Jan 31 15:15:50 crc kubenswrapper[4735]: W0131 15:15:50.751975 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0e33520_34a2_4009_9f61_7b6211fa8744.slice/crio-a4ea9e6063fc5bcf57f2828693f550c6ea8f7a661ba5ecc2322d0f0f84a3238f WatchSource:0}: Error finding container a4ea9e6063fc5bcf57f2828693f550c6ea8f7a661ba5ecc2322d0f0f84a3238f: Status 404 returned error can't find the container with id a4ea9e6063fc5bcf57f2828693f550c6ea8f7a661ba5ecc2322d0f0f84a3238f Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 
15:15:51.075580 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cd5d5cd9-8xjdv"] Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.690390 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt9w8" event={"ID":"972df6a4-e6ad-41de-9573-b80779a22bd3","Type":"ContainerStarted","Data":"955c680b58a38b7d1fa55ba801249925c6aaf884941f0dcb32f8676ac0e4aefe"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.693191 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bhf6c" event={"ID":"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98","Type":"ContainerStarted","Data":"ad591be32839c11079c08e1f7e61178986f7c56fc7415a2091ac9c13b27c614b"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.698083 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd5d5cd9-8xjdv" event={"ID":"e70f9259-db47-4290-9778-8bf2849a809a","Type":"ContainerStarted","Data":"b0c1864d434ecf43205307904a38b262293a3c831c5b8a09328cb40921a4bfa5"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.698115 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd5d5cd9-8xjdv" event={"ID":"e70f9259-db47-4290-9778-8bf2849a809a","Type":"ContainerStarted","Data":"7bb895d021e4d92e3cb3c95bd0d6a80881cb46278a0e2a1cc23b210d17f951d0"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.700504 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766f55bc7b-w8qbt" event={"ID":"a0e33520-34a2-4009-9f61-7b6211fa8744","Type":"ContainerStarted","Data":"9b8207277bae9020bfa2cf056f842756b936df3048e37927236cbc7cea89b07f"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.700632 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.700732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766f55bc7b-w8qbt" event={"ID":"a0e33520-34a2-4009-9f61-7b6211fa8744","Type":"ContainerStarted","Data":"a03803ac074743d189bb3fce8f90634e06ca61b9dbf6f417c23d2b05edc67ad6"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.700806 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766f55bc7b-w8qbt" event={"ID":"a0e33520-34a2-4009-9f61-7b6211fa8744","Type":"ContainerStarted","Data":"a4ea9e6063fc5bcf57f2828693f550c6ea8f7a661ba5ecc2322d0f0f84a3238f"} Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.700884 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.720911 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tt9w8" podStartSLOduration=3.482730024 podStartE2EDuration="52.720891237s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" firstStartedPulling="2026-01-31 15:15:00.975914074 +0000 UTC m=+986.749243116" lastFinishedPulling="2026-01-31 15:15:50.214075287 +0000 UTC m=+1035.987404329" observedRunningTime="2026-01-31 15:15:51.712958573 +0000 UTC m=+1037.486287625" watchObservedRunningTime="2026-01-31 15:15:51.720891237 +0000 UTC m=+1037.494220269" Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.730792 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bhf6c" podStartSLOduration=2.69708563 podStartE2EDuration="52.730773217s" podCreationTimestamp="2026-01-31 15:14:59 +0000 UTC" 
firstStartedPulling="2026-01-31 15:15:00.991912597 +0000 UTC m=+986.765241639" lastFinishedPulling="2026-01-31 15:15:51.025600184 +0000 UTC m=+1036.798929226" observedRunningTime="2026-01-31 15:15:51.728342808 +0000 UTC m=+1037.501671860" watchObservedRunningTime="2026-01-31 15:15:51.730773217 +0000 UTC m=+1037.504102259" Jan 31 15:15:51 crc kubenswrapper[4735]: I0131 15:15:51.753087 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-766f55bc7b-w8qbt" podStartSLOduration=2.753071717 podStartE2EDuration="2.753071717s" podCreationTimestamp="2026-01-31 15:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:51.745610946 +0000 UTC m=+1037.518939978" watchObservedRunningTime="2026-01-31 15:15:51.753071717 +0000 UTC m=+1037.526400759" Jan 31 15:15:52 crc kubenswrapper[4735]: I0131 15:15:52.711228 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd5d5cd9-8xjdv" event={"ID":"e70f9259-db47-4290-9778-8bf2849a809a","Type":"ContainerStarted","Data":"9baf7e528424ac1e58d7a281ed616af16a5a30202526b6179ddef276898d1c46"} Jan 31 15:15:52 crc kubenswrapper[4735]: I0131 15:15:52.747880 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74cd5d5cd9-8xjdv" podStartSLOduration=2.747863072 podStartE2EDuration="2.747863072s" podCreationTimestamp="2026-01-31 15:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:52.736350756 +0000 UTC m=+1038.509679808" watchObservedRunningTime="2026-01-31 15:15:52.747863072 +0000 UTC m=+1038.521192104" Jan 31 15:15:53 crc kubenswrapper[4735]: I0131 15:15:53.729701 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:15:57 crc kubenswrapper[4735]: I0131 15:15:57.777513 4735 generic.go:334] "Generic (PLEG): container finished" podID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" containerID="ad591be32839c11079c08e1f7e61178986f7c56fc7415a2091ac9c13b27c614b" exitCode=0 Jan 31 15:15:57 crc kubenswrapper[4735]: I0131 15:15:57.777572 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bhf6c" event={"ID":"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98","Type":"ContainerDied","Data":"ad591be32839c11079c08e1f7e61178986f7c56fc7415a2091ac9c13b27c614b"} Jan 31 15:15:57 crc kubenswrapper[4735]: E0131 15:15:57.944456 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.111692 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.184011 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.184378 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="dnsmasq-dns" containerID="cri-o://7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279" 
gracePeriod=10 Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.268406 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.402167 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-784979f994-vtd4m" podUID="c022909b-46cd-4e9d-851e-483e23358bd8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.644816 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786162 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786579 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786608 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786649 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt78n\" (UniqueName: \"kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786694 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.786720 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc\") pod \"4cf7c76c-9099-41cd-9705-a60c323046a4\" (UID: \"4cf7c76c-9099-41cd-9705-a60c323046a4\") " Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.791100 4735 generic.go:334] "Generic (PLEG): container finished" podID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerID="7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279" exitCode=0 Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.791173 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.791195 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerDied","Data":"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279"} Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.791246 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-cvvlz" event={"ID":"4cf7c76c-9099-41cd-9705-a60c323046a4","Type":"ContainerDied","Data":"58e60cc917e176686458cf73fd3f138433eb8ee743250f70e62ce9e6e251c1a8"} Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.791263 4735 scope.go:117] "RemoveContainer" containerID="7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.805234 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="ceilometer-notification-agent" containerID="cri-o://9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172" gracePeriod=30 Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.805332 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerStarted","Data":"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec"} Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.805381 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.805751 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="proxy-httpd" containerID="cri-o://15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec" gracePeriod=30 Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.805822 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="sg-core" containerID="cri-o://fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50" gracePeriod=30 Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.809103 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n" (OuterVolumeSpecName: "kube-api-access-bt78n") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "kube-api-access-bt78n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.852207 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.856170 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.856258 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config" (OuterVolumeSpecName: "config") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.859456 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.860560 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cf7c76c-9099-41cd-9705-a60c323046a4" (UID: "4cf7c76c-9099-41cd-9705-a60c323046a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.888953 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.888986 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.888997 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.889024 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt78n\" (UniqueName: \"kubernetes.io/projected/4cf7c76c-9099-41cd-9705-a60c323046a4-kube-api-access-bt78n\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.889036 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:58 crc kubenswrapper[4735]: I0131 15:15:58.889046 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cf7c76c-9099-41cd-9705-a60c323046a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.042645 4735 scope.go:117] "RemoveContainer" 
containerID="dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.068971 4735 scope.go:117] "RemoveContainer" containerID="7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279" Jan 31 15:15:59 crc kubenswrapper[4735]: E0131 15:15:59.069529 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279\": container with ID starting with 7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279 not found: ID does not exist" containerID="7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.069568 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279"} err="failed to get container status \"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279\": rpc error: code = NotFound desc = could not find container \"7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279\": container with ID starting with 7236647eb801e897a84b3934354a7fed7316a137beaedad69309c0c49e84e279 not found: ID does not exist" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.069590 4735 scope.go:117] "RemoveContainer" containerID="dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6" Jan 31 15:15:59 crc kubenswrapper[4735]: E0131 15:15:59.070000 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6\": container with ID starting with dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6 not found: ID does not exist" containerID="dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.070053 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6"} err="failed to get container status \"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6\": rpc error: code = NotFound desc = could not find container \"dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6\": container with ID starting with dde85d117a311f18286d53c24dd3b048d0c3c476022eefa375d032808c1f59f6 not found: ID does not exist" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.091503 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.128777 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.135554 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-cvvlz"] Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.197524 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w7dj\" (UniqueName: \"kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj\") pod \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.197698 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data\") pod \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.197823 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle\") pod \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\" (UID: \"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98\") " Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.203674 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj" (OuterVolumeSpecName: "kube-api-access-4w7dj") pod "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" (UID: "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98"). InnerVolumeSpecName "kube-api-access-4w7dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.205125 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" (UID: "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.230363 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" (UID: "c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.300987 4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.301020 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.301069 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w7dj\" (UniqueName: \"kubernetes.io/projected/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98-kube-api-access-4w7dj\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.553013 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" path="/var/lib/kubelet/pods/4cf7c76c-9099-41cd-9705-a60c323046a4/volumes" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.815240 4735 generic.go:334] "Generic (PLEG): container finished" podID="972df6a4-e6ad-41de-9573-b80779a22bd3" containerID="955c680b58a38b7d1fa55ba801249925c6aaf884941f0dcb32f8676ac0e4aefe" exitCode=0 Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.815334 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt9w8" event={"ID":"972df6a4-e6ad-41de-9573-b80779a22bd3","Type":"ContainerDied","Data":"955c680b58a38b7d1fa55ba801249925c6aaf884941f0dcb32f8676ac0e4aefe"} Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.817131 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bhf6c" event={"ID":"c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98","Type":"ContainerDied","Data":"59557337d60d8c3bee0ae901c5e34728389d897424f8ddf1b60c47f4f5d4abe5"} Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.817178 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59557337d60d8c3bee0ae901c5e34728389d897424f8ddf1b60c47f4f5d4abe5" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.817242 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bhf6c" Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.825082 4735 generic.go:334] "Generic (PLEG): container finished" podID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerID="15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec" exitCode=0 Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.825120 4735 generic.go:334] "Generic (PLEG): container finished" podID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerID="fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50" exitCode=2 Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.825155 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerDied","Data":"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec"} Jan 31 15:15:59 crc kubenswrapper[4735]: I0131 15:15:59.825187 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerDied","Data":"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50"} Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.044790 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-66c7c44665-k447s"] Jan 31 15:16:00 crc kubenswrapper[4735]: E0131 15:16:00.045231 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="init" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.045253 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="init" Jan 31 15:16:00 crc kubenswrapper[4735]: E0131 15:16:00.045285 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" containerName="barbican-db-sync" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.045294 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" containerName="barbican-db-sync" Jan 31 15:16:00 crc kubenswrapper[4735]: E0131 15:16:00.045312 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="dnsmasq-dns" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.045321 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="dnsmasq-dns" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.045538 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" containerName="barbican-db-sync" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.045584 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf7c76c-9099-41cd-9705-a60c323046a4" containerName="dnsmasq-dns" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.049291 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.054285 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.058009 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l2hgl" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.058254 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.065440 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.071480 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.081209 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.087143 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.113690 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8df5434-b30b-49b1-9130-b152a98f3af0-logs\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.113803 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwmt\" (UniqueName: \"kubernetes.io/projected/c8df5434-b30b-49b1-9130-b152a98f3af0-kube-api-access-rbwmt\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.113839 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.113860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data-custom\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.113910 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-combined-ca-bundle\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.127560 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-66c7c44665-k447s"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.165467 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.168171 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.181821 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215650 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215706 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data-custom\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxt8\" (UniqueName: \"kubernetes.io/projected/df8f8f18-32ae-4729-9e50-304d7dfdbf07-kube-api-access-5lxt8\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215815 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-combined-ca-bundle\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215861 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8df5434-b30b-49b1-9130-b152a98f3af0-logs\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215896 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-combined-ca-bundle\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215938 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f8f18-32ae-4729-9e50-304d7dfdbf07-logs\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.215984 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data-custom\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.216025 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwmt\" (UniqueName: \"kubernetes.io/projected/c8df5434-b30b-49b1-9130-b152a98f3af0-kube-api-access-rbwmt\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.216056 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.216370 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8df5434-b30b-49b1-9130-b152a98f3af0-logs\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.225350 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data-custom\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.236548 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-combined-ca-bundle\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.237975 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8df5434-b30b-49b1-9130-b152a98f3af0-config-data\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.246535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwmt\" (UniqueName: \"kubernetes.io/projected/c8df5434-b30b-49b1-9130-b152a98f3af0-kube-api-access-rbwmt\") pod \"barbican-worker-66c7c44665-k447s\" (UID: \"c8df5434-b30b-49b1-9130-b152a98f3af0\") " pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.264990 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.267032 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.269474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.292643 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.317800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.317940 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data-custom\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318063 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grms\" (UniqueName: \"kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318148 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318228 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318341 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318386 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxt8\" (UniqueName: \"kubernetes.io/projected/df8f8f18-32ae-4729-9e50-304d7dfdbf07-kube-api-access-5lxt8\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318537 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318565 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318594 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-combined-ca-bundle\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.318684 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f8f18-32ae-4729-9e50-304d7dfdbf07-logs\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.319106 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8f8f18-32ae-4729-9e50-304d7dfdbf07-logs\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.322490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data-custom\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.323952 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-config-data\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.327504 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8f8f18-32ae-4729-9e50-304d7dfdbf07-combined-ca-bundle\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.338907 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxt8\" (UniqueName: \"kubernetes.io/projected/df8f8f18-32ae-4729-9e50-304d7dfdbf07-kube-api-access-5lxt8\") pod \"barbican-keystone-listener-7c7cb96f6b-ctvpf\" (UID: \"df8f8f18-32ae-4729-9e50-304d7dfdbf07\") " pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 
15:16:00.373179 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66c7c44665-k447s" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.392445 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420003 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420380 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420435 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420455 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grms\" (UniqueName: \"kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420548 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420612 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.420637 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm4sg\" (UniqueName: \"kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.421455 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.421627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.422122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.422355 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.422678 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.441444 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grms\" (UniqueName: \"kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms\") pod \"dnsmasq-dns-85ff748b95-h4mph\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.484650 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.524578 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.525258 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.525300 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.525361 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.525407 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm4sg\" (UniqueName: \"kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.526125 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.531507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.538219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.538315 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc 
kubenswrapper[4735]: I0131 15:16:00.542728 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm4sg\" (UniqueName: \"kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg\") pod \"barbican-api-85697fdc46-fxl79\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.701712 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.843562 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66c7c44665-k447s"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.866159 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf"] Jan 31 15:16:00 crc kubenswrapper[4735]: I0131 15:16:00.995251 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:01 crc kubenswrapper[4735]: W0131 15:16:01.003387 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c5f0076_c630_4d4e_9072_b5dcbb0c9ea2.slice/crio-30cdcf0fd2e2f88323d3ba8c784e692ed91a95ae34f4d7ccb979374d0fe80e6d WatchSource:0}: Error finding container 30cdcf0fd2e2f88323d3ba8c784e692ed91a95ae34f4d7ccb979374d0fe80e6d: Status 404 returned error can't find the container with id 30cdcf0fd2e2f88323d3ba8c784e692ed91a95ae34f4d7ccb979374d0fe80e6d Jan 31 15:16:01 crc kubenswrapper[4735]: W0131 15:16:01.164545 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b41e44c_4475_4eca_ac0f_89889a2a0dc6.slice/crio-c9c8164f56e77170d5f2fb3b9d0353a60d786fea58f62af6807f9e6d89eb7aba WatchSource:0}: Error finding container c9c8164f56e77170d5f2fb3b9d0353a60d786fea58f62af6807f9e6d89eb7aba: Status 404 returned error can't find the container with id c9c8164f56e77170d5f2fb3b9d0353a60d786fea58f62af6807f9e6d89eb7aba Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.166636 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.196847 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.350832 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.350900 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.350941 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.350967 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2xp\" (UniqueName: \"kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.351045 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.351082 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data\") pod \"972df6a4-e6ad-41de-9573-b80779a22bd3\" (UID: \"972df6a4-e6ad-41de-9573-b80779a22bd3\") " Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.351194 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.352028 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/972df6a4-e6ad-41de-9573-b80779a22bd3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.356240 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts" (OuterVolumeSpecName: "scripts") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.357806 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.364602 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp" (OuterVolumeSpecName: "kube-api-access-8j2xp") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "kube-api-access-8j2xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.394178 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.420314 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data" (OuterVolumeSpecName: "config-data") pod "972df6a4-e6ad-41de-9573-b80779a22bd3" (UID: "972df6a4-e6ad-41de-9573-b80779a22bd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.456907 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.456941 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.456950 4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.456963 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972df6a4-e6ad-41de-9573-b80779a22bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.456971 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2xp\" (UniqueName: \"kubernetes.io/projected/972df6a4-e6ad-41de-9573-b80779a22bd3-kube-api-access-8j2xp\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.847924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" event={"ID":"df8f8f18-32ae-4729-9e50-304d7dfdbf07","Type":"ContainerStarted","Data":"880048073ba6c7ca5afb047b7ec2c8d248834c9d902acc6897330f6b9910f68b"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.856980 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt9w8" event={"ID":"972df6a4-e6ad-41de-9573-b80779a22bd3","Type":"ContainerDied","Data":"a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.857025 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6f009554474040a0af6fff631053b75d9e8b2f06470c5fbc4c2e6d928ed1752" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.857100 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tt9w8" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.859657 4735 generic.go:334] "Generic (PLEG): container finished" podID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerID="9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e" exitCode=0 Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.859751 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" event={"ID":"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2","Type":"ContainerDied","Data":"9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.859787 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" event={"ID":"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2","Type":"ContainerStarted","Data":"30cdcf0fd2e2f88323d3ba8c784e692ed91a95ae34f4d7ccb979374d0fe80e6d"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.867295 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerStarted","Data":"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.867335 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerStarted","Data":"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.867344 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerStarted","Data":"c9c8164f56e77170d5f2fb3b9d0353a60d786fea58f62af6807f9e6d89eb7aba"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.867814 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.867995 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.869580 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66c7c44665-k447s" event={"ID":"c8df5434-b30b-49b1-9130-b152a98f3af0","Type":"ContainerStarted","Data":"25fc56bef9cc268eb7f3bece4b576e020ab079bf058bcc3cb47e1dd5940b7be7"} Jan 31 15:16:01 crc kubenswrapper[4735]: I0131 15:16:01.912526 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85697fdc46-fxl79" podStartSLOduration=1.91250375 podStartE2EDuration="1.91250375s" podCreationTimestamp="2026-01-31 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:01.895349386 +0000 UTC m=+1047.668678438" watchObservedRunningTime="2026-01-31 15:16:01.91250375 +0000 UTC m=+1047.685832792" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.183929 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:02 crc kubenswrapper[4735]: E0131 15:16:02.184290 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" containerName="cinder-db-sync" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.184301 
4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" containerName="cinder-db-sync" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.184485 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" containerName="cinder-db-sync" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.185507 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.187385 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l628k" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.196599 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.196757 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.196858 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.227562 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.262139 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273486 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273531 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273592 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273663 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnf45\" (UniqueName: \"kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273691 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.273705 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.308227 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.311616 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.325954 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375158 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375277 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnf45\" (UniqueName: \"kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375314 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375332 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375363 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.375381 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.381536 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.383955 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.389814 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.395587 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.396301 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.406942 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnf45\" (UniqueName: \"kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45\") pod \"cinder-scheduler-0\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.455558 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.457069 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.461258 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.477521 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483302 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b58w\" (UniqueName: \"kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483346 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483384 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483407 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483460 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.483641 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.559195 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585662 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585717 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585768 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585786 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8fxz\" (UniqueName: \"kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585805 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585841 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b58w\" (UniqueName: \"kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585917 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585940 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585962 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.585982 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.586002 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.586019 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.587164 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.587844 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.588157 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.588453 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.588733 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.605793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b58w\" (UniqueName: 
\"kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w\") pod \"dnsmasq-dns-5c9776ccc5-ngkj8\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.642479 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.687873 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688012 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688076 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688106 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688182 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8fxz\" (UniqueName: \"kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688726 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.688959 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " 
pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.692625 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.693122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.693235 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.693648 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.710500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8fxz\" (UniqueName: \"kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz\") pod \"cinder-api-0\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " pod="openstack/cinder-api-0" Jan 31 15:16:02 crc kubenswrapper[4735]: I0131 15:16:02.783839 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:03 crc kubenswrapper[4735]: I0131 15:16:03.377249 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:03 crc kubenswrapper[4735]: W0131 15:16:03.384789 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc33d8ce_5c36_40c5_b07f_9c558d23f153.slice/crio-9dabfa7ac4b1eb7609fe0988be8c1b3f373260c758de48678dddb6f97cc63d86 WatchSource:0}: Error finding container 9dabfa7ac4b1eb7609fe0988be8c1b3f373260c758de48678dddb6f97cc63d86: Status 404 returned error can't find the container with id 9dabfa7ac4b1eb7609fe0988be8c1b3f373260c758de48678dddb6f97cc63d86 Jan 31 15:16:03 crc kubenswrapper[4735]: I0131 15:16:03.430639 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:16:03 crc kubenswrapper[4735]: W0131 15:16:03.434941 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf56db27d_6892_4506_b605_6198658b7f6d.slice/crio-35f2142428a6f80c2bc5e871baff12076b9ba8422fb7bf09e6d7f10f82b4c35e WatchSource:0}: Error finding container 35f2142428a6f80c2bc5e871baff12076b9ba8422fb7bf09e6d7f10f82b4c35e: Status 404 returned error can't find the container with id 35f2142428a6f80c2bc5e871baff12076b9ba8422fb7bf09e6d7f10f82b4c35e Jan 31 15:16:03 crc kubenswrapper[4735]: I0131 15:16:03.506706 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.002385 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerStarted","Data":"9dabfa7ac4b1eb7609fe0988be8c1b3f373260c758de48678dddb6f97cc63d86"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.005293 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" event={"ID":"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2","Type":"ContainerStarted","Data":"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.005359 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="dnsmasq-dns" containerID="cri-o://9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06" gracePeriod=10 Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.005373 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.007923 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66c7c44665-k447s" event={"ID":"c8df5434-b30b-49b1-9130-b152a98f3af0","Type":"ContainerStarted","Data":"168f3668c91ec9acefc781f7a882c988301852dd472370df7cc2e3bd6c6e1d2d"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.007949 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66c7c44665-k447s" event={"ID":"c8df5434-b30b-49b1-9130-b152a98f3af0","Type":"ContainerStarted","Data":"678a751c6f2eec77e4ddafe9713cf8a45f3f6c5e7d8d089c0d8bd6e11cdd712f"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.011622 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" event={"ID":"df8f8f18-32ae-4729-9e50-304d7dfdbf07","Type":"ContainerStarted","Data":"5c60c4971b8aedac492ba4d45a4354ecc25015e7fabd3220e4d9293dd1a49435"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.011650 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" event={"ID":"df8f8f18-32ae-4729-9e50-304d7dfdbf07","Type":"ContainerStarted","Data":"4cbdb0f8d8da8f3dcb82a7c5383fde1463795b6828e30d2f9be7c13783b6fff8"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.019734 4735 generic.go:334] "Generic (PLEG): container finished" podID="f56db27d-6892-4506-b605-6198658b7f6d" containerID="5894c58035f656275e52344fc96699db27c30eab24ac59ee56e62612df8532c4" exitCode=0 Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.019788 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" event={"ID":"f56db27d-6892-4506-b605-6198658b7f6d","Type":"ContainerDied","Data":"5894c58035f656275e52344fc96699db27c30eab24ac59ee56e62612df8532c4"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.019811 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" event={"ID":"f56db27d-6892-4506-b605-6198658b7f6d","Type":"ContainerStarted","Data":"35f2142428a6f80c2bc5e871baff12076b9ba8422fb7bf09e6d7f10f82b4c35e"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.025739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerStarted","Data":"45558fa9bac57c1dfbf572ca4ef79cd0512ccb790c94c546565e47bb1cc98f5f"} Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.029130 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" podStartSLOduration=4.029111309 podStartE2EDuration="4.029111309s" podCreationTimestamp="2026-01-31 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:04.025191459 +0000 UTC m=+1049.798520521" watchObservedRunningTime="2026-01-31 15:16:04.029111309 +0000 UTC m=+1049.802440351" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.062112 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c7cb96f6b-ctvpf" podStartSLOduration=2.112887752 podStartE2EDuration="4.062089051s" podCreationTimestamp="2026-01-31 15:16:00 +0000 UTC" firstStartedPulling="2026-01-31 15:16:00.895840588 +0000 UTC m=+1046.669169630" lastFinishedPulling="2026-01-31 15:16:02.845041887 +0000 UTC m=+1048.618370929" observedRunningTime="2026-01-31 15:16:04.052823619 +0000 UTC m=+1049.826152651" watchObservedRunningTime="2026-01-31 15:16:04.062089051 +0000 UTC m=+1049.835418093" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.081283 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-66c7c44665-k447s" podStartSLOduration=2.159631914 podStartE2EDuration="4.081263143s" podCreationTimestamp="2026-01-31 15:16:00 +0000 UTC" firstStartedPulling="2026-01-31 15:16:00.899002808 +0000 UTC m=+1046.672331850" lastFinishedPulling="2026-01-31 15:16:02.820634037 +0000 UTC m=+1048.593963079" observedRunningTime="2026-01-31 15:16:04.076170449 +0000 UTC m=+1049.849499491" watchObservedRunningTime="2026-01-31 15:16:04.081263143 +0000 UTC 
m=+1049.854592185" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.682669 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.709647 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856258 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856318 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856366 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856385 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856482 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8d44\" (UniqueName: \"kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856507 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856532 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856571 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856588 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 
15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856612 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856657 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts\") pod \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\" (UID: \"3b5bc7d4-ba0a-4ed0-990a-44186c837298\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856676 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8grms\" (UniqueName: \"kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.856708 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb\") pod \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\" (UID: \"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2\") " Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.859653 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.859896 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.873531 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts" (OuterVolumeSpecName: "scripts") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.874909 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44" (OuterVolumeSpecName: "kube-api-access-c8d44") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "kube-api-access-c8d44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.877663 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms" (OuterVolumeSpecName: "kube-api-access-8grms") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "kube-api-access-8grms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.931389 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config" (OuterVolumeSpecName: "config") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.941209 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.945851 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.948334 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.952015 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.957122 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958296 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958324 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958334 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958345 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8d44\" (UniqueName: \"kubernetes.io/projected/3b5bc7d4-ba0a-4ed0-990a-44186c837298-kube-api-access-c8d44\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958354 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958364 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b5bc7d4-ba0a-4ed0-990a-44186c837298-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958378 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958386 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958393 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958402 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8grms\" (UniqueName: \"kubernetes.io/projected/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-kube-api-access-8grms\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.958410 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.963128 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data" (OuterVolumeSpecName: "config-data") pod "3b5bc7d4-ba0a-4ed0-990a-44186c837298" (UID: "3b5bc7d4-ba0a-4ed0-990a-44186c837298"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:04 crc kubenswrapper[4735]: I0131 15:16:04.982437 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" (UID: "5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.036766 4735 generic.go:334] "Generic (PLEG): container finished" podID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerID="9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06" exitCode=0 Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.036816 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.036830 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" event={"ID":"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2","Type":"ContainerDied","Data":"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.036863 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4mph" event={"ID":"5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2","Type":"ContainerDied","Data":"30cdcf0fd2e2f88323d3ba8c784e692ed91a95ae34f4d7ccb979374d0fe80e6d"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.036908 4735 scope.go:117] "RemoveContainer" containerID="9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.040251 4735 generic.go:334] "Generic (PLEG): container finished" podID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerID="9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172" exitCode=0 Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.040300 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerDied","Data":"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.040315 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.040341 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b5bc7d4-ba0a-4ed0-990a-44186c837298","Type":"ContainerDied","Data":"f2d52f77d5e893c14b8e4474dcfdfdbb88969330afacf1854bf7b78c27520a93"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.046230 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" event={"ID":"f56db27d-6892-4506-b605-6198658b7f6d","Type":"ContainerStarted","Data":"126f609286877bdc472e0350ee5ae624b8d437690dff7d0f597c352d84fbf723"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.046332 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.048811 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerStarted","Data":"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf"} Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.059838 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5bc7d4-ba0a-4ed0-990a-44186c837298-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.059881 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.065688 4735 scope.go:117] "RemoveContainer" containerID="9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.072086 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" podStartSLOduration=3.072069405 podStartE2EDuration="3.072069405s" podCreationTimestamp="2026-01-31 15:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:05.066408165 +0000 UTC m=+1050.839737227" watchObservedRunningTime="2026-01-31 15:16:05.072069405 +0000 UTC m=+1050.845398447" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.103361 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.104257 4735 scope.go:117] "RemoveContainer" containerID="9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.106479 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06\": container with ID starting with 9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06 not found: ID does not exist" containerID="9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.106508 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06"} err="failed to get container status \"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06\": rpc 
error: code = NotFound desc = could not find container \"9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06\": container with ID starting with 9d0e7037c189ccf8869f24bd5b84010830b98ae566ef6dd739ab7edd9dbcfe06 not found: ID does not exist" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.106529 4735 scope.go:117] "RemoveContainer" containerID="9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.106801 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e\": container with ID starting with 9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e not found: ID does not exist" containerID="9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.106840 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e"} err="failed to get container status \"9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e\": rpc error: code = NotFound desc = could not find container \"9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e\": container with ID starting with 9a47c4b0169e5e3de443b19ad4c481c83e56d60a0136a73f6bb603fc7a34ba0e not found: ID does not exist" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.106866 4735 scope.go:117] "RemoveContainer" containerID="15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.124478 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4mph"] Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.167716 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.191800 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.194141 4735 scope.go:117] "RemoveContainer" containerID="fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.207089 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.208082 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="dnsmasq-dns" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208130 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="dnsmasq-dns" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.208153 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="ceilometer-notification-agent" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208162 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="ceilometer-notification-agent" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.208205 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="proxy-httpd" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208217 4735 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="proxy-httpd" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.208233 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="sg-core" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208241 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="sg-core" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.208290 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="init" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208299 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="init" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208612 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="sg-core" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208637 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="ceilometer-notification-agent" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208653 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" containerName="dnsmasq-dns" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.208694 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" containerName="proxy-httpd" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.211285 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.212953 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.213439 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.213681 4735 scope.go:117] "RemoveContainer" containerID="9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.218451 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.274854 4735 scope.go:117] "RemoveContainer" containerID="15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.275356 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec\": container with ID starting with 15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec not found: ID does not exist" containerID="15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.275410 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec"} err="failed to get container status \"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec\": rpc error: code = NotFound desc = could not find container \"15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec\": container with ID starting with 15f096f9caa8ea938d31069bf06b70a9e028bed0d850a79ac7f6c6e84b41feec not found: ID does not exist" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.275471 4735 scope.go:117] "RemoveContainer" containerID="fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.276999 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50\": container with ID starting with fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50 not found: ID does not exist" containerID="fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.277041 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50"} err="failed to get container status \"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50\": rpc error: code = NotFound desc = could not find container \"fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50\": container with ID starting with fd788d947e75d9f05ee76a014df20e3dbc8848e000a99d7ef6930dfe9d791e50 not found: ID does not exist" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.277066 4735 scope.go:117] "RemoveContainer" containerID="9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172" Jan 31 15:16:05 crc kubenswrapper[4735]: E0131 15:16:05.278741 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172\": container with ID starting with 9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172 not found: ID does not exist" containerID="9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.278881 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172"} err="failed to get container status \"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172\": rpc error: code = NotFound desc = could not find container \"9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172\": container with ID starting with 9aad12b6a46a40993a3df9e10bf3798a90ebf5c487d8d05c660ad4019f3b4172 not found: ID does not exist" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.373856 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.373981 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.374003 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.374030 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.374048 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.374090 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pvw\" (UniqueName: \"kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.374244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.476001 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477052 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477097 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477129 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pvw\" (UniqueName: \"kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477234 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.477267 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.478075 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.480680 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.481392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.482753 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.487723 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.489044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.494379 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pvw\" (UniqueName: \"kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw\") pod \"ceilometer-0\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.566190 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.595145 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5bc7d4-ba0a-4ed0-990a-44186c837298" path="/var/lib/kubelet/pods/3b5bc7d4-ba0a-4ed0-990a-44186c837298/volumes" Jan 31 15:16:05 crc kubenswrapper[4735]: I0131 15:16:05.596127 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2" path="/var/lib/kubelet/pods/5c5f0076-c630-4d4e-9072-b5dcbb0c9ea2/volumes" Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.062500 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerStarted","Data":"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650"} Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.062590 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.065898 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerStarted","Data":"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da"} Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.065930 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerStarted","Data":"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9"} Jan 31 15:16:06 crc kubenswrapper[4735]: W0131 15:16:06.068295 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ef4c78_5d10_4718_aae5_d10be18e46b8.slice/crio-6973f17867e63d00909af80134a90be87a9654a0e344fce957ff74356ac2b723 WatchSource:0}: Error finding container 6973f17867e63d00909af80134a90be87a9654a0e344fce957ff74356ac2b723: Status 404 returned error can't find the container with id 6973f17867e63d00909af80134a90be87a9654a0e344fce957ff74356ac2b723 
Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.074001 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.097482 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.097457534 podStartE2EDuration="4.097457534s" podCreationTimestamp="2026-01-31 15:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:06.093801701 +0000 UTC m=+1051.867130753" watchObservedRunningTime="2026-01-31 15:16:06.097457534 +0000 UTC m=+1051.870786576" Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.119754 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.147252928 podStartE2EDuration="4.119736883s" podCreationTimestamp="2026-01-31 15:16:02 +0000 UTC" firstStartedPulling="2026-01-31 15:16:03.387795721 +0000 UTC m=+1049.161124763" lastFinishedPulling="2026-01-31 15:16:04.360279676 +0000 UTC m=+1050.133608718" observedRunningTime="2026-01-31 15:16:06.113877288 +0000 UTC m=+1051.887206340" watchObservedRunningTime="2026-01-31 15:16:06.119736883 +0000 UTC m=+1051.893065925" Jan 31 15:16:06 crc kubenswrapper[4735]: I0131 15:16:06.542456 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.058528 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-844d5857fb-gs56h"] Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.060392 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.070618 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.078322 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-844d5857fb-gs56h"] Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.079180 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.082595 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerStarted","Data":"560fa35246bee850c02eba85ff6dc72c679cf027147448c1f742213ae889ea8c"} Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.082685 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerStarted","Data":"6973f17867e63d00909af80134a90be87a9654a0e344fce957ff74356ac2b723"} Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213350 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data-custom\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/89ea2bff-49c7-4b54-a026-c7c632da1b0c-logs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-combined-ca-bundle\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213602 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxbq\" (UniqueName: \"kubernetes.io/projected/89ea2bff-49c7-4b54-a026-c7c632da1b0c-kube-api-access-8qxbq\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213700 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-internal-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.213762 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-public-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315179 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315230 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxbq\" (UniqueName: \"kubernetes.io/projected/89ea2bff-49c7-4b54-a026-c7c632da1b0c-kube-api-access-8qxbq\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315274 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-internal-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315308 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-public-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315370 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data-custom\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315390 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ea2bff-49c7-4b54-a026-c7c632da1b0c-logs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315405 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-combined-ca-bundle\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.315921 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ea2bff-49c7-4b54-a026-c7c632da1b0c-logs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.319288 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-internal-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.320913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-combined-ca-bundle\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.323971 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-public-tls-certs\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.331055 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data-custom\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.333800 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/89ea2bff-49c7-4b54-a026-c7c632da1b0c-config-data\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.340648 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxbq\" (UniqueName: \"kubernetes.io/projected/89ea2bff-49c7-4b54-a026-c7c632da1b0c-kube-api-access-8qxbq\") pod \"barbican-api-844d5857fb-gs56h\" (UID: \"89ea2bff-49c7-4b54-a026-c7c632da1b0c\") " pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.380856 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:07 crc kubenswrapper[4735]: I0131 15:16:07.580744 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.039199 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-844d5857fb-gs56h"] Jan 31 15:16:08 crc kubenswrapper[4735]: W0131 15:16:08.053529 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ea2bff_49c7_4b54_a026_c7c632da1b0c.slice/crio-cdb139b7330ccb5e5eb071c5ec07cbe2eab951483e31157b386370528689223a WatchSource:0}: Error finding container cdb139b7330ccb5e5eb071c5ec07cbe2eab951483e31157b386370528689223a: Status 404 returned error can't find the container with id cdb139b7330ccb5e5eb071c5ec07cbe2eab951483e31157b386370528689223a Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.092434 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-844d5857fb-gs56h" event={"ID":"89ea2bff-49c7-4b54-a026-c7c632da1b0c","Type":"ContainerStarted","Data":"cdb139b7330ccb5e5eb071c5ec07cbe2eab951483e31157b386370528689223a"} Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.095850 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerStarted","Data":"158feb2bf07f38e97a7a23fc4a92df0ecedbb4ccd368e2833bb03a14459f2257"} Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.096064 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api-log" containerID="cri-o://ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" gracePeriod=30 Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.096167 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api" containerID="cri-o://f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" gracePeriod=30 Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.628973 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.669194 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.669533 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.669631 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.669696 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.669765 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.670117 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8fxz\" (UniqueName: \"kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.670189 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts\") pod \"277afd22-f5b0-4877-92c5-06427715eef8\" (UID: \"277afd22-f5b0-4877-92c5-06427715eef8\") " Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.681871 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz" (OuterVolumeSpecName: "kube-api-access-f8fxz") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "kube-api-access-f8fxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.682243 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs" (OuterVolumeSpecName: "logs") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.682281 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.685602 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts" (OuterVolumeSpecName: "scripts") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.699368 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.732894 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775283 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775327 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277afd22-f5b0-4877-92c5-06427715eef8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775339 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/277afd22-f5b0-4877-92c5-06427715eef8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775348 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8fxz\" (UniqueName: \"kubernetes.io/projected/277afd22-f5b0-4877-92c5-06427715eef8-kube-api-access-f8fxz\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775360 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.775368 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.799584 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data" (OuterVolumeSpecName: "config-data") pod "277afd22-f5b0-4877-92c5-06427715eef8" (UID: "277afd22-f5b0-4877-92c5-06427715eef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:08 crc kubenswrapper[4735]: I0131 15:16:08.878544 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277afd22-f5b0-4877-92c5-06427715eef8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.111502 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerStarted","Data":"40d97b7b93ec6cb017ef4f604e74afbb02a9dc5b5a836184ae4ae0c9083d1606"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114053 4735 generic.go:334] "Generic (PLEG): container finished" podID="277afd22-f5b0-4877-92c5-06427715eef8" containerID="f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" exitCode=0 Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114088 4735 generic.go:334] "Generic (PLEG): container finished" podID="277afd22-f5b0-4877-92c5-06427715eef8" containerID="ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" exitCode=143 Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114136 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerDied","Data":"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114156 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerDied","Data":"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114167 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"277afd22-f5b0-4877-92c5-06427715eef8","Type":"ContainerDied","Data":"45558fa9bac57c1dfbf572ca4ef79cd0512ccb790c94c546565e47bb1cc98f5f"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114199 4735 scope.go:117] "RemoveContainer" containerID="f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.114175 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.116893 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-844d5857fb-gs56h" event={"ID":"89ea2bff-49c7-4b54-a026-c7c632da1b0c","Type":"ContainerStarted","Data":"ca09013ecbfe52a1a08df52fc9dc03bc6f25c7265c8c312362c3b46356ea7265"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.116966 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-844d5857fb-gs56h" event={"ID":"89ea2bff-49c7-4b54-a026-c7c632da1b0c","Type":"ContainerStarted","Data":"f6a201a5a234cdb9e7739be5a00f2076ca3fce48c304020ae4c1d0a29f7a5da2"} Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.117217 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.141939 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-844d5857fb-gs56h" podStartSLOduration=2.141925186 podStartE2EDuration="2.141925186s" podCreationTimestamp="2026-01-31 15:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:09.136199194 +0000 UTC m=+1054.909528236" watchObservedRunningTime="2026-01-31 15:16:09.141925186 +0000 UTC m=+1054.915254228" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.161465 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.168359 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.199179 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:09 crc kubenswrapper[4735]: E0131 15:16:09.200042 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.200063 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api" Jan 31 15:16:09 crc kubenswrapper[4735]: E0131 15:16:09.200090 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api-log" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.200097 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api-log" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.200491 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api-log" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.200509 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="277afd22-f5b0-4877-92c5-06427715eef8" containerName="cinder-api" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.201520 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.206835 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.206875 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.207884 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.214339 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrl5\" (UniqueName: \"kubernetes.io/projected/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-kube-api-access-rjrl5\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287457 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287488 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287531 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-logs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287555 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287576 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287598 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-scripts\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.287624 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.301138 4735 scope.go:117] "RemoveContainer" containerID="ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.319811 4735 scope.go:117] "RemoveContainer" containerID="f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" Jan 31 15:16:09 crc kubenswrapper[4735]: E0131 15:16:09.320178 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650\": container with ID starting with f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650 not found: ID does not exist" containerID="f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.320208 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650"} err="failed to get container status \"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650\": rpc error: code = NotFound desc = could not find container \"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650\": container with ID starting with f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650 not found: ID does not exist" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.320252 4735 scope.go:117] "RemoveContainer" containerID="ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" Jan 31 15:16:09 crc kubenswrapper[4735]: E0131 15:16:09.320584 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf\": container with ID starting with ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf not found: ID does not exist" containerID="ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.320625 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf"} err="failed to get container status \"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf\": rpc error: code = NotFound desc = could not find container \"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf\": container with ID starting with ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf not found: ID does not exist" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.320638 4735 scope.go:117] "RemoveContainer" containerID="f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.320982 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650"} err="failed to get container status \"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650\": rpc error: code = NotFound desc = could not find container \"f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650\": container with ID starting with f706f94aab36e515942d05435190c4b0344e6028742570962f0fd2f0e8164650 not found: ID does not exist" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.321024 4735 scope.go:117] "RemoveContainer" containerID="ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.321437 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf"} err="failed to get container status \"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf\": rpc error: code = NotFound desc = could not find container \"ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf\": container with ID starting with ca809ac6c1a853645eb15cf9d89f4c3f75daaed159c796f03f1caed9ff45e0bf not found: ID does not exist" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388526 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrl5\" (UniqueName: \"kubernetes.io/projected/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-kube-api-access-rjrl5\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388603 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388636 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388681 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-logs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388705 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388725 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388747 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-scripts\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388771 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.388792 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.389246 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-logs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.389570 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.394929 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.395912 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.395945 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.396451 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.396555 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.397550 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-scripts\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.428540 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrl5\" (UniqueName: \"kubernetes.io/projected/c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f-kube-api-access-rjrl5\") pod \"cinder-api-0\" (UID: \"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f\") " pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.541206 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.550549 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277afd22-f5b0-4877-92c5-06427715eef8" path="/var/lib/kubelet/pods/277afd22-f5b0-4877-92c5-06427715eef8/volumes" Jan 31 15:16:09 crc kubenswrapper[4735]: I0131 15:16:09.979038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 15:16:09 crc kubenswrapper[4735]: W0131 15:16:09.983985 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e6f6c4_12bb_43d4_a77c_2ba5c2b79f1f.slice/crio-3d566fa80d8e6280926010b84503f926e8d141547e050e3021a24413e22f5b78 WatchSource:0}: Error finding container 3d566fa80d8e6280926010b84503f926e8d141547e050e3021a24413e22f5b78: Status 404 returned error can't find the container with id 3d566fa80d8e6280926010b84503f926e8d141547e050e3021a24413e22f5b78 Jan 31 15:16:10 crc kubenswrapper[4735]: I0131 15:16:10.129361 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f","Type":"ContainerStarted","Data":"3d566fa80d8e6280926010b84503f926e8d141547e050e3021a24413e22f5b78"} Jan 31 15:16:10 crc kubenswrapper[4735]: I0131 15:16:10.131469 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:10 crc kubenswrapper[4735]: I0131 15:16:10.324077 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:16:10 crc kubenswrapper[4735]: I0131 15:16:10.640136 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:16:11 crc kubenswrapper[4735]: I0131 15:16:11.143615 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f","Type":"ContainerStarted","Data":"ef41d6d09272eb522b064c61535947d865d22f325028e8d194b262ab1451f06d"} Jan 31 15:16:11 crc kubenswrapper[4735]: I0131 15:16:11.146611 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerStarted","Data":"179b1aa49ccfe1db3a77cbc8d0f2b5cc08bc044a598e063f077ea132ba4138f5"} Jan 31 15:16:11 crc kubenswrapper[4735]: I0131 15:16:11.147648 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.051947 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.086295 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.095352 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.960133358 podStartE2EDuration="7.095323276s" podCreationTimestamp="2026-01-31 15:16:05 +0000 UTC" firstStartedPulling="2026-01-31 15:16:06.07114611 +0000 UTC m=+1051.844475152" lastFinishedPulling="2026-01-31 15:16:10.206336028 +0000 UTC m=+1055.979665070" observedRunningTime="2026-01-31 15:16:11.176563779 +0000 UTC m=+1056.949892911" watchObservedRunningTime="2026-01-31 15:16:12.095323276 +0000 UTC m=+1057.868652348" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.192336 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f","Type":"ContainerStarted","Data":"928bcf2b0705cdee58e85d4f07789897d126d1e131c710ff62b9b04afdb099c6"} Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.192684 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.206497 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.251005 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.2509875839999998 podStartE2EDuration="3.250987584s" podCreationTimestamp="2026-01-31 15:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:12.228465048 +0000 UTC m=+1058.001794090" watchObservedRunningTime="2026-01-31 15:16:12.250987584 +0000 UTC m=+1058.024316616" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.355483 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-784979f994-vtd4m" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.411520 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.411901 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon-log" containerID="cri-o://c66d6894f5d7b8ef37aaaed37239780e27cdf90c09aed9df2f36f26fa6784c64" gracePeriod=30 Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.412109 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" containerID="cri-o://502456fd1de1026a07431dbd3dae3b054005bc33fecf428146696f57607d0db7" gracePeriod=30 Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.644638 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.700160 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:16:12 crc kubenswrapper[4735]: I0131 15:16:12.700389 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="dnsmasq-dns" 
containerID="cri-o://cdbcf6043ba7b8dc3c449dfea86c96188bb5749f548cd095bf9a9bc5a76bdd8b" gracePeriod=10 Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.079114 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.111546 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.151792 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.200052 4735 generic.go:334] "Generic (PLEG): container finished" podID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerID="cdbcf6043ba7b8dc3c449dfea86c96188bb5749f548cd095bf9a9bc5a76bdd8b" exitCode=0 Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.200140 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" event={"ID":"1d1e0f10-c620-4389-ad20-abc3cc647615","Type":"ContainerDied","Data":"cdbcf6043ba7b8dc3c449dfea86c96188bb5749f548cd095bf9a9bc5a76bdd8b"} Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.200401 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="cinder-scheduler" containerID="cri-o://95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9" gracePeriod=30 Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.200467 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="probe" containerID="cri-o://662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da" gracePeriod=30 Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.773346 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786359 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786465 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pjzz\" (UniqueName: \"kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786551 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786571 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786637 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.786710 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config\") pod \"1d1e0f10-c620-4389-ad20-abc3cc647615\" (UID: \"1d1e0f10-c620-4389-ad20-abc3cc647615\") " Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.792601 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz" (OuterVolumeSpecName: "kube-api-access-2pjzz") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "kube-api-access-2pjzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.862732 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.872491 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.891635 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.891665 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pjzz\" (UniqueName: \"kubernetes.io/projected/1d1e0f10-c620-4389-ad20-abc3cc647615-kube-api-access-2pjzz\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.891677 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.903899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.910554 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config" (OuterVolumeSpecName: "config") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.911769 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d1e0f10-c620-4389-ad20-abc3cc647615" (UID: "1d1e0f10-c620-4389-ad20-abc3cc647615"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.992721 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.992757 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:13 crc kubenswrapper[4735]: I0131 15:16:13.992769 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d1e0f10-c620-4389-ad20-abc3cc647615-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.212786 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.212786 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cgvb8" event={"ID":"1d1e0f10-c620-4389-ad20-abc3cc647615","Type":"ContainerDied","Data":"1e3003f65006eeedfefe80d636c07a03d78c7442dcf3f434342d8da5ab32c907"} Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.212914 4735 scope.go:117] "RemoveContainer" containerID="cdbcf6043ba7b8dc3c449dfea86c96188bb5749f548cd095bf9a9bc5a76bdd8b" Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.214997 4735 generic.go:334] "Generic (PLEG): container finished" podID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerID="662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da" exitCode=0 Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.215090 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerDied","Data":"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da"} Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.233383 4735 scope.go:117] "RemoveContainer" containerID="d367b4b669abc98897eba62da754837c09f7e183d1f0a41fca468ba66dd208b3" Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.256635 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:16:14 crc kubenswrapper[4735]: I0131 15:16:14.267615 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cgvb8"] Jan 31 15:16:15 crc kubenswrapper[4735]: I0131 15:16:15.556199 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" path="/var/lib/kubelet/pods/1d1e0f10-c620-4389-ad20-abc3cc647615/volumes" Jan 31 15:16:16 crc kubenswrapper[4735]: I0131 15:16:16.257986 4735 generic.go:334] "Generic (PLEG): container finished" podID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerID="502456fd1de1026a07431dbd3dae3b054005bc33fecf428146696f57607d0db7" exitCode=0 Jan 31 15:16:16 crc kubenswrapper[4735]: I0131 15:16:16.258096 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerDied","Data":"502456fd1de1026a07431dbd3dae3b054005bc33fecf428146696f57607d0db7"} Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.197646 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258296 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258678 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258757 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnf45\" (UniqueName: \"kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258826 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.258915 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts\") pod \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\" (UID: \"bc33d8ce-5c36-40c5-b07f-9c558d23f153\") " Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.260944 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.279738 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45" (OuterVolumeSpecName: "kube-api-access-cnf45") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "kube-api-access-cnf45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282628 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282742 4735 generic.go:334] "Generic (PLEG): container finished" podID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerID="95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9" exitCode=0 Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282785 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerDied","Data":"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9"} Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282817 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc33d8ce-5c36-40c5-b07f-9c558d23f153","Type":"ContainerDied","Data":"9dabfa7ac4b1eb7609fe0988be8c1b3f373260c758de48678dddb6f97cc63d86"} Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282816 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.282854 4735 scope.go:117] "RemoveContainer" containerID="662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.294247 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts" (OuterVolumeSpecName: "scripts") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.341261 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.360948 4735 scope.go:117] "RemoveContainer" containerID="95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.362284 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnf45\" (UniqueName: \"kubernetes.io/projected/bc33d8ce-5c36-40c5-b07f-9c558d23f153-kube-api-access-cnf45\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.362325 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc33d8ce-5c36-40c5-b07f-9c558d23f153-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.362336 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.362348 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.362357 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.378337 4735 scope.go:117] "RemoveContainer" containerID="662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da" Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.382581 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da\": container with ID starting with 662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da not found: ID does not exist" containerID="662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.382636 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da"} err="failed to get container status \"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da\": rpc error: code = NotFound desc = could not find container \"662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da\": container with ID starting with 662e020bed2dc0ba269fc930cf2e300f2a72a83045b16d6b087edfea23bbc0da not found: ID does not exist" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.382662 4735 scope.go:117] "RemoveContainer" containerID="95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9" Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.385348 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9\": container with ID starting with 95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9 not found: ID does not exist" containerID="95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.385447 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9"} err="failed to get container status \"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9\": rpc error: code = NotFound desc = could not find container \"95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9\": container with ID starting with 95cae9b2f24f5c7429b57440a6b2b0a645ba571b05b367b534074cfeb4ff48a9 not found: ID does not exist" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.389481 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data" (OuterVolumeSpecName: "config-data") pod "bc33d8ce-5c36-40c5-b07f-9c558d23f153" (UID: "bc33d8ce-5c36-40c5-b07f-9c558d23f153"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.464201 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc33d8ce-5c36-40c5-b07f-9c558d23f153-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.612876 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.628947 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.638776 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.639129 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="dnsmasq-dns" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639145 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="dnsmasq-dns" Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.639156 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="cinder-scheduler" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639162 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="cinder-scheduler" Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.639181 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="init" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639187 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="init" Jan 31 15:16:17 crc kubenswrapper[4735]: E0131 15:16:17.639200 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="probe" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639205 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="probe" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639369 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="probe" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639385 4735 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" containerName="cinder-scheduler" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.639397 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d1e0f10-c620-4389-ad20-abc3cc647615" containerName="dnsmasq-dns" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.640494 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.650016 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.662845 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.667771 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/475cce0c-6c29-41e3-8c56-b5368f1b9e92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.667811 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-scripts\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.667828 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.668142 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh296\" (UniqueName: \"kubernetes.io/projected/475cce0c-6c29-41e3-8c56-b5368f1b9e92-kube-api-access-rh296\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.668285 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.668314 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.670736 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b94ccc6d9-2fktc" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769750 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769834 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/475cce0c-6c29-41e3-8c56-b5368f1b9e92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-scripts\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769878 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.769960 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh296\" (UniqueName: \"kubernetes.io/projected/475cce0c-6c29-41e3-8c56-b5368f1b9e92-kube-api-access-rh296\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.770223 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/475cce0c-6c29-41e3-8c56-b5368f1b9e92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.774686 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.777416 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.778150 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-config-data\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.785841 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/475cce0c-6c29-41e3-8c56-b5368f1b9e92-scripts\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.790141 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh296\" (UniqueName: \"kubernetes.io/projected/475cce0c-6c29-41e3-8c56-b5368f1b9e92-kube-api-access-rh296\") pod \"cinder-scheduler-0\" (UID: \"475cce0c-6c29-41e3-8c56-b5368f1b9e92\") " pod="openstack/cinder-scheduler-0" Jan 31 15:16:17 crc kubenswrapper[4735]: I0131 15:16:17.958435 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 15:16:18 crc kubenswrapper[4735]: I0131 15:16:18.268728 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 31 15:16:18 crc kubenswrapper[4735]: I0131 15:16:18.269070 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:16:18 crc kubenswrapper[4735]: I0131 15:16:18.448991 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.186099 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.302367 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-844d5857fb-gs56h" Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.349559 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"475cce0c-6c29-41e3-8c56-b5368f1b9e92","Type":"ContainerStarted","Data":"19e6743e2129f3cf22575705fdebfabd43c7910cdb743544a8ea97ee3e168468"} Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.349604 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"475cce0c-6c29-41e3-8c56-b5368f1b9e92","Type":"ContainerStarted","Data":"c14470903208321191f85a56f73135f9a836b6cdba01a45fff1d34c803ee7d3e"} Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.364296 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.364529 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85697fdc46-fxl79" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api-log" containerID="cri-o://83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1" gracePeriod=30 Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.364764 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85697fdc46-fxl79" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api" containerID="cri-o://c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4" gracePeriod=30 Jan 31 15:16:19 crc kubenswrapper[4735]: I0131 15:16:19.583792 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc33d8ce-5c36-40c5-b07f-9c558d23f153" 
path="/var/lib/kubelet/pods/bc33d8ce-5c36-40c5-b07f-9c558d23f153/volumes" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.360888 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"475cce0c-6c29-41e3-8c56-b5368f1b9e92","Type":"ContainerStarted","Data":"5aa7df92f689c7cab53cf2230cf9be25ba824e4cee023990b1106672ea284183"} Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.362798 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerID="83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1" exitCode=143 Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.362851 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerDied","Data":"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1"} Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.381920 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.381901768 podStartE2EDuration="3.381901768s" podCreationTimestamp="2026-01-31 15:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:20.380615941 +0000 UTC m=+1066.153944993" watchObservedRunningTime="2026-01-31 15:16:20.381901768 +0000 UTC m=+1066.155230810" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.479349 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74cd5d5cd9-8xjdv" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.561296 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.564825 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-894686c96-gsnzd" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-httpd" containerID="cri-o://65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77" gracePeriod=30 Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.561594 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-894686c96-gsnzd" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-api" containerID="cri-o://841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1" gracePeriod=30 Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.632494 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.634116 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.647824 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.648061 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.648272 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lsvll" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.648380 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.752140 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config-secret\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.752209 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.752289 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnpsr\" (UniqueName: \"kubernetes.io/projected/75603184-bd90-47b2-a5e2-c06e0c205001-kube-api-access-pnpsr\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.752349 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.853925 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config-secret\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.854010 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.854101 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnpsr\" (UniqueName: \"kubernetes.io/projected/75603184-bd90-47b2-a5e2-c06e0c205001-kube-api-access-pnpsr\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.854170 4735 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.855920 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.860393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-openstack-config-secret\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.863019 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75603184-bd90-47b2-a5e2-c06e0c205001-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.892920 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnpsr\" (UniqueName: \"kubernetes.io/projected/75603184-bd90-47b2-a5e2-c06e0c205001-kube-api-access-pnpsr\") pod \"openstackclient\" (UID: \"75603184-bd90-47b2-a5e2-c06e0c205001\") " pod="openstack/openstackclient" Jan 31 15:16:20 crc kubenswrapper[4735]: I0131 15:16:20.983227 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.374625 4735 generic.go:334] "Generic (PLEG): container finished" podID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerID="65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77" exitCode=0 Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.374682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerDied","Data":"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77"} Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.486332 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.528729 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 15:16:21 crc kubenswrapper[4735]: W0131 15:16:21.531932 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75603184_bd90_47b2_a5e2_c06e0c205001.slice/crio-59c435fdad281ca6ed6c08dada792754fe15169e01cc6a9cac39cc660a90c78f WatchSource:0}: Error finding container 59c435fdad281ca6ed6c08dada792754fe15169e01cc6a9cac39cc660a90c78f: Status 404 returned error can't find the container with id 59c435fdad281ca6ed6c08dada792754fe15169e01cc6a9cac39cc660a90c78f Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.568835 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-766f55bc7b-w8qbt" Jan 31 15:16:21 crc kubenswrapper[4735]: I0131 15:16:21.838349 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 15:16:22 crc kubenswrapper[4735]: I0131 15:16:22.393671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75603184-bd90-47b2-a5e2-c06e0c205001","Type":"ContainerStarted","Data":"59c435fdad281ca6ed6c08dada792754fe15169e01cc6a9cac39cc660a90c78f"} Jan 31 15:16:22 crc kubenswrapper[4735]: I0131 15:16:22.736057 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85697fdc46-fxl79" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:55064->10.217.0.161:9311: read: connection reset by peer" Jan 31 15:16:22 crc kubenswrapper[4735]: I0131 15:16:22.736330 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85697fdc46-fxl79" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:55076->10.217.0.161:9311: read: connection reset by peer" Jan 31 15:16:22 crc kubenswrapper[4735]: I0131 15:16:22.959460 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.166291 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.328813 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle\") pod \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.329076 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm4sg\" (UniqueName: \"kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg\") pod \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.329217 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom\") pod \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.329298 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs\") pod \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.329319 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data\") pod \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\" (UID: \"1b41e44c-4475-4eca-ac0f-89889a2a0dc6\") " Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.329777 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs" (OuterVolumeSpecName: "logs") pod "1b41e44c-4475-4eca-ac0f-89889a2a0dc6" (UID: "1b41e44c-4475-4eca-ac0f-89889a2a0dc6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.330285 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.334656 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b41e44c-4475-4eca-ac0f-89889a2a0dc6" (UID: "1b41e44c-4475-4eca-ac0f-89889a2a0dc6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.344539 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg" (OuterVolumeSpecName: "kube-api-access-tm4sg") pod "1b41e44c-4475-4eca-ac0f-89889a2a0dc6" (UID: "1b41e44c-4475-4eca-ac0f-89889a2a0dc6"). InnerVolumeSpecName "kube-api-access-tm4sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.369759 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b41e44c-4475-4eca-ac0f-89889a2a0dc6" (UID: "1b41e44c-4475-4eca-ac0f-89889a2a0dc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.392090 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data" (OuterVolumeSpecName: "config-data") pod "1b41e44c-4475-4eca-ac0f-89889a2a0dc6" (UID: "1b41e44c-4475-4eca-ac0f-89889a2a0dc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.405149 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerID="c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4" exitCode=0 Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.405199 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerDied","Data":"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4"} Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.405230 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85697fdc46-fxl79" event={"ID":"1b41e44c-4475-4eca-ac0f-89889a2a0dc6","Type":"ContainerDied","Data":"c9c8164f56e77170d5f2fb3b9d0353a60d786fea58f62af6807f9e6d89eb7aba"} Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.405249 4735 scope.go:117] "RemoveContainer" containerID="c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.405405 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85697fdc46-fxl79" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.431507 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm4sg\" (UniqueName: \"kubernetes.io/projected/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-kube-api-access-tm4sg\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.431818 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.431831 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.431843 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b41e44c-4475-4eca-ac0f-89889a2a0dc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.454025 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.464154 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85697fdc46-fxl79"] Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.470410 4735 scope.go:117] "RemoveContainer" containerID="83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.519274 4735 scope.go:117] "RemoveContainer" containerID="c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4" Jan 31 15:16:23 crc kubenswrapper[4735]: E0131 15:16:23.519683 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4\": container with ID starting with c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4 not found: ID does not exist" containerID="c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.519729 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4"} err="failed to get container status \"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4\": rpc error: code = NotFound desc = could not find container \"c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4\": container with ID starting with c427d3a38be8e40f9f8b150cbe1524e96005cd6c8535e8b45e25ff4cc4461df4 not found: ID does not exist" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.519757 4735 scope.go:117] "RemoveContainer" containerID="83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1" Jan 31 15:16:23 crc kubenswrapper[4735]: E0131 15:16:23.520316 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1\": container with ID starting with 83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1 not found: ID does not exist" containerID="83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1" Jan 31 
15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.520365 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1"} err="failed to get container status \"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1\": rpc error: code = NotFound desc = could not find container \"83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1\": container with ID starting with 83c99c75c143791f32be5fac1e41a684d61cad81e496b732f6dc7c8028487ba1 not found: ID does not exist" Jan 31 15:16:23 crc kubenswrapper[4735]: I0131 15:16:23.549809 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" path="/var/lib/kubelet/pods/1b41e44c-4475-4eca-ac0f-89889a2a0dc6/volumes" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.456415 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f4f968b5f-slb77"] Jan 31 15:16:25 crc kubenswrapper[4735]: E0131 15:16:25.456990 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.457002 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api" Jan 31 15:16:25 crc kubenswrapper[4735]: E0131 15:16:25.457023 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api-log" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.457029 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api-log" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.457193 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.457235 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b41e44c-4475-4eca-ac0f-89889a2a0dc6" containerName="barbican-api-log" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.458120 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.460864 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.461277 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.461533 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.476410 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f4f968b5f-slb77"] Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxmz\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-kube-api-access-rlxmz\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572461 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-internal-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572611 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-public-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572801 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-etc-swift\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572870 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-combined-ca-bundle\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.572923 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-config-data\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.573039 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-log-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " 
pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.573124 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-run-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.674875 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-combined-ca-bundle\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.674945 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-config-data\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675000 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-log-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675020 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-run-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675072 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxmz\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-kube-api-access-rlxmz\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675114 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-internal-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-public-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.675230 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-etc-swift\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc 
kubenswrapper[4735]: I0131 15:16:25.676624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-run-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.676840 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdf1b1c9-1210-4c8f-beba-1780efc67349-log-httpd\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.680961 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-etc-swift\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.681482 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-combined-ca-bundle\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.682202 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-public-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.682256 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-config-data\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.682493 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdf1b1c9-1210-4c8f-beba-1780efc67349-internal-tls-certs\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.698732 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxmz\" (UniqueName: \"kubernetes.io/projected/bdf1b1c9-1210-4c8f-beba-1780efc67349-kube-api-access-rlxmz\") pod \"swift-proxy-f4f968b5f-slb77\" (UID: \"bdf1b1c9-1210-4c8f-beba-1780efc67349\") " pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:25 crc kubenswrapper[4735]: I0131 15:16:25.781247 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.001362 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.184797 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle\") pod \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.184895 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhpg9\" (UniqueName: \"kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9\") pod \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.184923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config\") pod \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.184965 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config\") pod \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.185015 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs\") pod \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\" (UID: \"abd7c543-6a9a-4bbc-8162-88dfa7239b61\") " Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.190046 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9" (OuterVolumeSpecName: "kube-api-access-fhpg9") pod "abd7c543-6a9a-4bbc-8162-88dfa7239b61" (UID: "abd7c543-6a9a-4bbc-8162-88dfa7239b61"). InnerVolumeSpecName "kube-api-access-fhpg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.192586 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "abd7c543-6a9a-4bbc-8162-88dfa7239b61" (UID: "abd7c543-6a9a-4bbc-8162-88dfa7239b61"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.241592 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config" (OuterVolumeSpecName: "config") pod "abd7c543-6a9a-4bbc-8162-88dfa7239b61" (UID: "abd7c543-6a9a-4bbc-8162-88dfa7239b61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.246462 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abd7c543-6a9a-4bbc-8162-88dfa7239b61" (UID: "abd7c543-6a9a-4bbc-8162-88dfa7239b61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.263678 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "abd7c543-6a9a-4bbc-8162-88dfa7239b61" (UID: "abd7c543-6a9a-4bbc-8162-88dfa7239b61"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.287358 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhpg9\" (UniqueName: \"kubernetes.io/projected/abd7c543-6a9a-4bbc-8162-88dfa7239b61-kube-api-access-fhpg9\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.287400 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.287416 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.287449 4735 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.287460 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd7c543-6a9a-4bbc-8162-88dfa7239b61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.321612 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f4f968b5f-slb77"] Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.438282 4735 generic.go:334] "Generic (PLEG): container finished" podID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerID="841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1" exitCode=0 Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.438336 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerDied","Data":"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1"} Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.438362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-894686c96-gsnzd" event={"ID":"abd7c543-6a9a-4bbc-8162-88dfa7239b61","Type":"ContainerDied","Data":"626a6753ae2598a75d5b97890ecfca93df415a3595fa09ab34d95162fb8b5dcd"} Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.438379 4735 scope.go:117] "RemoveContainer" containerID="65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.438519 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-894686c96-gsnzd" Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.471119 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.481870 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-894686c96-gsnzd"] Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.936535 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.937109 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-central-agent" containerID="cri-o://560fa35246bee850c02eba85ff6dc72c679cf027147448c1f742213ae889ea8c" gracePeriod=30 Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.937168 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="sg-core" containerID="cri-o://40d97b7b93ec6cb017ef4f604e74afbb02a9dc5b5a836184ae4ae0c9083d1606" gracePeriod=30 Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.937217 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="proxy-httpd" containerID="cri-o://179b1aa49ccfe1db3a77cbc8d0f2b5cc08bc044a598e063f077ea132ba4138f5" gracePeriod=30 Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.937237 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-notification-agent" containerID="cri-o://158feb2bf07f38e97a7a23fc4a92df0ecedbb4ccd368e2833bb03a14459f2257" gracePeriod=30 Jan 31 15:16:26 crc kubenswrapper[4735]: I0131 15:16:26.947217 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453228 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerID="179b1aa49ccfe1db3a77cbc8d0f2b5cc08bc044a598e063f077ea132ba4138f5" exitCode=0 Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453636 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerID="40d97b7b93ec6cb017ef4f604e74afbb02a9dc5b5a836184ae4ae0c9083d1606" exitCode=2 Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453321 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerDied","Data":"179b1aa49ccfe1db3a77cbc8d0f2b5cc08bc044a598e063f077ea132ba4138f5"} Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerDied","Data":"40d97b7b93ec6cb017ef4f604e74afbb02a9dc5b5a836184ae4ae0c9083d1606"} Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453654 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerID="560fa35246bee850c02eba85ff6dc72c679cf027147448c1f742213ae889ea8c" exitCode=0 Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.453728 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerDied","Data":"560fa35246bee850c02eba85ff6dc72c679cf027147448c1f742213ae889ea8c"} Jan 31 15:16:27 crc kubenswrapper[4735]: I0131 15:16:27.552678 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" path="/var/lib/kubelet/pods/abd7c543-6a9a-4bbc-8162-88dfa7239b61/volumes" Jan 31 15:16:28 crc kubenswrapper[4735]: I0131 15:16:28.219336 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 15:16:28 crc kubenswrapper[4735]: I0131 15:16:28.269539 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 31 15:16:29 crc kubenswrapper[4735]: I0131 15:16:29.479465 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerID="158feb2bf07f38e97a7a23fc4a92df0ecedbb4ccd368e2833bb03a14459f2257" exitCode=0 Jan 31 15:16:29 crc kubenswrapper[4735]: I0131 15:16:29.479778 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerDied","Data":"158feb2bf07f38e97a7a23fc4a92df0ecedbb4ccd368e2833bb03a14459f2257"} Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.116837 4735 scope.go:117] "RemoveContainer" containerID="841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.337609 4735 scope.go:117] "RemoveContainer" containerID="65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.338066 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77\": container with ID starting with 65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77 not found: ID does not exist" containerID="65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.338113 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77"} err="failed to get container status \"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77\": rpc error: code = NotFound desc = could not find container \"65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77\": container with ID starting with 65ccad1b7516832d01a32e96561f57efdf7c924591052c5a49d0f06bed7b8e77 not found: ID does not exist" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.338147 4735 scope.go:117] "RemoveContainer" containerID="841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.338818 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1\": container with ID starting with 841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1 not found: ID does not exist" 
containerID="841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.338890 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1"} err="failed to get container status \"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1\": rpc error: code = NotFound desc = could not find container \"841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1\": container with ID starting with 841df3f79f004bc77df806f72f48ad53b05f8e800462fcef55293afa8b0d65f1 not found: ID does not exist" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.386370 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.496987 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4ef4c78-5d10-4718-aae5-d10be18e46b8","Type":"ContainerDied","Data":"6973f17867e63d00909af80134a90be87a9654a0e344fce957ff74356ac2b723"} Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.497035 4735 scope.go:117] "RemoveContainer" containerID="179b1aa49ccfe1db3a77cbc8d0f2b5cc08bc044a598e063f077ea132ba4138f5" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.497120 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.501828 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75603184-bd90-47b2-a5e2-c06e0c205001","Type":"ContainerStarted","Data":"4770664a049dbfeb59982a7b7e45fcd85e72b1b90e9fafbdd9d7e0dd79565631"} Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.505541 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f4f968b5f-slb77" event={"ID":"bdf1b1c9-1210-4c8f-beba-1780efc67349","Type":"ContainerStarted","Data":"2ef24602318abac82090060d4814bfe442ed41bc9dc5ba238a19247cfd914035"} Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.505573 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f4f968b5f-slb77" event={"ID":"bdf1b1c9-1210-4c8f-beba-1780efc67349","Type":"ContainerStarted","Data":"71c8ffbe6d26452d6ef1ec83428b546fe7c2c6d2733ccc5301e56271068db9ab"} Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507087 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507261 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5pvw\" (UniqueName: \"kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507342 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 
15:16:31.507464 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507599 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507715 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.507823 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data\") pod \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\" (UID: \"e4ef4c78-5d10-4718-aae5-d10be18e46b8\") " Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.509503 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.509786 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.515640 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw" (OuterVolumeSpecName: "kube-api-access-k5pvw") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "kube-api-access-k5pvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.519993 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.834947689 podStartE2EDuration="11.51997855s" podCreationTimestamp="2026-01-31 15:16:20 +0000 UTC" firstStartedPulling="2026-01-31 15:16:21.533725339 +0000 UTC m=+1067.307054371" lastFinishedPulling="2026-01-31 15:16:31.21875619 +0000 UTC m=+1076.992085232" observedRunningTime="2026-01-31 15:16:31.518224451 +0000 UTC m=+1077.291553493" watchObservedRunningTime="2026-01-31 15:16:31.51997855 +0000 UTC m=+1077.293307592" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.521126 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts" (OuterVolumeSpecName: "scripts") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.535734 4735 scope.go:117] "RemoveContainer" containerID="40d97b7b93ec6cb017ef4f604e74afbb02a9dc5b5a836184ae4ae0c9083d1606" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.549730 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.572377 4735 scope.go:117] "RemoveContainer" containerID="158feb2bf07f38e97a7a23fc4a92df0ecedbb4ccd368e2833bb03a14459f2257" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.594709 4735 scope.go:117] "RemoveContainer" containerID="560fa35246bee850c02eba85ff6dc72c679cf027147448c1f742213ae889ea8c" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.609789 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.609817 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.609924 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.610240 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4ef4c78-5d10-4718-aae5-d10be18e46b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.610276 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5pvw\" (UniqueName: \"kubernetes.io/projected/e4ef4c78-5d10-4718-aae5-d10be18e46b8-kube-api-access-k5pvw\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.620246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data" (OuterVolumeSpecName: "config-data") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.627585 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ef4c78-5d10-4718-aae5-d10be18e46b8" (UID: "e4ef4c78-5d10-4718-aae5-d10be18e46b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.711988 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.712019 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ef4c78-5d10-4718-aae5-d10be18e46b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.825806 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.835294 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857015 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857460 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-notification-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857483 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-notification-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857500 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857507 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857526 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="sg-core" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857535 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="sg-core" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857548 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="proxy-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857555 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="proxy-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857571 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-central-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: 
I0131 15:16:31.857578 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-central-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: E0131 15:16:31.857594 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-api" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857601 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-api" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857811 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-api" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857826 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-central-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857836 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="sg-core" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857855 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="proxy-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857869 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd7c543-6a9a-4bbc-8162-88dfa7239b61" containerName="neutron-httpd" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.857878 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" containerName="ceilometer-notification-agent" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.859417 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.864960 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.865007 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:16:31 crc kubenswrapper[4735]: I0131 15:16:31.877046 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016749 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcq69\" (UniqueName: \"kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016836 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016891 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016917 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016934 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016954 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.016987 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.073784 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:32 crc kubenswrapper[4735]: E0131 15:16:32.074433 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-pcq69 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="f8d7f857-2dce-43e5-8e09-885422f3e11d" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.118971 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120062 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcq69\" (UniqueName: \"kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120658 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120748 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.120896 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.121012 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.121313 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.129164 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.129224 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.130328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.135370 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.149227 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcq69\" (UniqueName: \"kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69\") pod \"ceilometer-0\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.514291 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f4f968b5f-slb77" event={"ID":"bdf1b1c9-1210-4c8f-beba-1780efc67349","Type":"ContainerStarted","Data":"f9e79638d0bdf8511c9d17d04ffb4e6b5070e4c55ed7f43fced175e43f3b4940"} Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.514900 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.515016 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.517456 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.528389 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.542921 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f4f968b5f-slb77" podStartSLOduration=7.542873009 podStartE2EDuration="7.542873009s" podCreationTimestamp="2026-01-31 15:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:32.536961972 +0000 UTC m=+1078.310291014" watchObservedRunningTime="2026-01-31 15:16:32.542873009 +0000 UTC m=+1078.316202081" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733308 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733363 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733557 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733588 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733646 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.733766 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcq69\" (UniqueName: \"kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.734220 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.734489 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.735983 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml\") pod \"f8d7f857-2dce-43e5-8e09-885422f3e11d\" (UID: \"f8d7f857-2dce-43e5-8e09-885422f3e11d\") " Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.736559 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.736594 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8d7f857-2dce-43e5-8e09-885422f3e11d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.748577 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data" (OuterVolumeSpecName: "config-data") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.752249 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.754014 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.754298 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69" (OuterVolumeSpecName: "kube-api-access-pcq69") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "kube-api-access-pcq69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.756576 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts" (OuterVolumeSpecName: "scripts") pod "f8d7f857-2dce-43e5-8e09-885422f3e11d" (UID: "f8d7f857-2dce-43e5-8e09-885422f3e11d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.838602 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.838645 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.838658 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcq69\" (UniqueName: \"kubernetes.io/projected/f8d7f857-2dce-43e5-8e09-885422f3e11d-kube-api-access-pcq69\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.838668 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:32 crc kubenswrapper[4735]: I0131 15:16:32.838676 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8d7f857-2dce-43e5-8e09-885422f3e11d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.532365 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.556877 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ef4c78-5d10-4718-aae5-d10be18e46b8" path="/var/lib/kubelet/pods/e4ef4c78-5d10-4718-aae5-d10be18e46b8/volumes" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.609416 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.618308 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.651927 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.657921 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.661044 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.661682 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.669028 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.758965 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759090 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hz7n\" (UniqueName: \"kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759225 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759273 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.759301 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.860891 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.860956 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861037 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hz7n\" (UniqueName: \"kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861066 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861138 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861185 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861797 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.861814 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.865345 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.865400 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.865605 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.872613 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.902747 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hz7n\" (UniqueName: \"kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n\") pod \"ceilometer-0\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " pod="openstack/ceilometer-0" Jan 31 15:16:33 crc kubenswrapper[4735]: I0131 15:16:33.991893 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:34 crc kubenswrapper[4735]: I0131 15:16:34.472127 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:34 crc kubenswrapper[4735]: W0131 15:16:34.489563 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6678c85_328e_44be_af65_5bd94d43d233.slice/crio-21aee550dd87bc1588269ee0be6c1ce91fc8d5e64151b2b19e0cb17156712de5 WatchSource:0}: Error finding container 21aee550dd87bc1588269ee0be6c1ce91fc8d5e64151b2b19e0cb17156712de5: Status 404 returned error can't find the container with id 21aee550dd87bc1588269ee0be6c1ce91fc8d5e64151b2b19e0cb17156712de5 Jan 31 15:16:34 crc kubenswrapper[4735]: I0131 15:16:34.540626 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerStarted","Data":"21aee550dd87bc1588269ee0be6c1ce91fc8d5e64151b2b19e0cb17156712de5"} Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.197678 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.198148 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-log" containerID="cri-o://0ec439ab2f57d0f14631a62bdac7d049b8efdd63f942251ce9f160accff1e3cb" gracePeriod=30 Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.198224 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-httpd" containerID="cri-o://39d79d7c8770fa2c36d71289a49a0e80bd191daa2d0ff3babf55af4000024967" gracePeriod=30 Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.555996 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d7f857-2dce-43e5-8e09-885422f3e11d" path="/var/lib/kubelet/pods/f8d7f857-2dce-43e5-8e09-885422f3e11d/volumes" Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.556714 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerStarted","Data":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} Jan 31 15:16:35 crc 
kubenswrapper[4735]: I0131 15:16:35.556739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerDied","Data":"0ec439ab2f57d0f14631a62bdac7d049b8efdd63f942251ce9f160accff1e3cb"} Jan 31 15:16:35 crc kubenswrapper[4735]: I0131 15:16:35.556049 4735 generic.go:334] "Generic (PLEG): container finished" podID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerID="0ec439ab2f57d0f14631a62bdac7d049b8efdd63f942251ce9f160accff1e3cb" exitCode=143 Jan 31 15:16:36 crc kubenswrapper[4735]: I0131 15:16:36.579895 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerStarted","Data":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} Jan 31 15:16:37 crc kubenswrapper[4735]: I0131 15:16:37.589629 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerStarted","Data":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.269952 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f754986cd-gdb8n" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.270359 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.363000 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:41042->10.217.0.150:9292: read: connection reset by peer" Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.363249 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:41040->10.217.0.150:9292: read: connection reset by peer" Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.613977 4735 generic.go:334] "Generic (PLEG): container finished" podID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerID="39d79d7c8770fa2c36d71289a49a0e80bd191daa2d0ff3babf55af4000024967" exitCode=0 Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.614019 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerDied","Data":"39d79d7c8770fa2c36d71289a49a0e80bd191daa2d0ff3babf55af4000024967"} Jan 31 15:16:38 crc kubenswrapper[4735]: I0131 15:16:38.881734 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046502 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046633 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046705 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046781 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046810 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046884 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046932 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.046972 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwx6\" (UniqueName: \"kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6\") pod \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\" (UID: \"15c8f74d-f504-4ddf-a823-12b35f0d65ba\") " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.047840 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs" (OuterVolumeSpecName: "logs") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.048116 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.062170 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6" (OuterVolumeSpecName: "kube-api-access-6nwx6") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "kube-api-access-6nwx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.065863 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts" (OuterVolumeSpecName: "scripts") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.067798 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.090382 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.103020 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data" (OuterVolumeSpecName: "config-data") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.123580 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "15c8f74d-f504-4ddf-a823-12b35f0d65ba" (UID: "15c8f74d-f504-4ddf-a823-12b35f0d65ba"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149217 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149247 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149263 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149292 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149302 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149311 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15c8f74d-f504-4ddf-a823-12b35f0d65ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149318 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nwx6\" (UniqueName: \"kubernetes.io/projected/15c8f74d-f504-4ddf-a823-12b35f0d65ba-kube-api-access-6nwx6\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.149357 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c8f74d-f504-4ddf-a823-12b35f0d65ba-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.171336 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.251272 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.626516 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15c8f74d-f504-4ddf-a823-12b35f0d65ba","Type":"ContainerDied","Data":"227e962ddb3af152008a8aeea8385742d705877ce4a6abc86d91b8ed552b30c8"} Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.626885 4735 scope.go:117] "RemoveContainer" containerID="39d79d7c8770fa2c36d71289a49a0e80bd191daa2d0ff3babf55af4000024967" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.626614 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.687519 4735 scope.go:117] "RemoveContainer" containerID="0ec439ab2f57d0f14631a62bdac7d049b8efdd63f942251ce9f160accff1e3cb" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.718575 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.741513 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.749243 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:39 crc kubenswrapper[4735]: E0131 15:16:39.749691 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-log" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.749714 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-log" Jan 31 15:16:39 crc kubenswrapper[4735]: E0131 15:16:39.749752 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-httpd" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.749761 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-httpd" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.749940 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-httpd" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.749966 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" containerName="glance-log" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.750862 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.753031 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.753978 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.760552 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.887882 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-855nq\" (UniqueName: \"kubernetes.io/projected/5620f33b-a10a-41ae-a9f2-707f94ebbe59-kube-api-access-855nq\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.887930 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-logs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.887947 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-scripts\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.888065 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.888224 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.888325 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.888370 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-config-data\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.888430 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990234 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-855nq\" (UniqueName: \"kubernetes.io/projected/5620f33b-a10a-41ae-a9f2-707f94ebbe59-kube-api-access-855nq\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990527 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-logs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990609 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-scripts\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990723 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990839 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990928 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.990992 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-logs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.991010 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.991152 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-config-data\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.991213 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5620f33b-a10a-41ae-a9f2-707f94ebbe59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.991298 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.995916 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.995934 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:39 crc kubenswrapper[4735]: I0131 15:16:39.996546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-scripts\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.005810 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5620f33b-a10a-41ae-a9f2-707f94ebbe59-config-data\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.015087 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-855nq\" (UniqueName: \"kubernetes.io/projected/5620f33b-a10a-41ae-a9f2-707f94ebbe59-kube-api-access-855nq\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.023723 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5620f33b-a10a-41ae-a9f2-707f94ebbe59\") " pod="openstack/glance-default-external-api-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.083750 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.635431 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerStarted","Data":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.636771 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.669652 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.625182991 podStartE2EDuration="7.669636427s" podCreationTimestamp="2026-01-31 15:16:33 +0000 UTC" firstStartedPulling="2026-01-31 15:16:34.492610853 +0000 UTC m=+1080.265939895" lastFinishedPulling="2026-01-31 15:16:39.537064289 +0000 UTC m=+1085.310393331" observedRunningTime="2026-01-31 15:16:40.665578192 +0000 UTC m=+1086.438907244" watchObservedRunningTime="2026-01-31 15:16:40.669636427 +0000 UTC m=+1086.442965469" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.788473 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.788913 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f4f968b5f-slb77" Jan 31 15:16:40 crc kubenswrapper[4735]: I0131 15:16:40.806017 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.095519 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.095782 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-log" containerID="cri-o://8e3fa09ea9e28525aa336edf5d5754a51831f07f43133acf1a19f456f601fd93" gracePeriod=30 Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.096201 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-httpd" containerID="cri-o://9c8ac6ac238b16eeb490fab3f29c77bc494a1e409869a8306478eed4d0d6219e" gracePeriod=30 Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.570594 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c8f74d-f504-4ddf-a823-12b35f0d65ba" path="/var/lib/kubelet/pods/15c8f74d-f504-4ddf-a823-12b35f0d65ba/volumes" Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.657957 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5620f33b-a10a-41ae-a9f2-707f94ebbe59","Type":"ContainerStarted","Data":"e04b5aabff747590b775efd9193ca8673ef46c91e7f4df8aa5eef744fb0b3f5f"} Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.657995 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5620f33b-a10a-41ae-a9f2-707f94ebbe59","Type":"ContainerStarted","Data":"6bfbea0d6defd5f02df6d8b4cdbdeca84ebc5a13090bc0da5eb0136d04fd9b43"} Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.659743 4735 generic.go:334] "Generic (PLEG): container 
finished" podID="afe1754f-14c1-4a91-b342-3046a183454e" containerID="8e3fa09ea9e28525aa336edf5d5754a51831f07f43133acf1a19f456f601fd93" exitCode=143 Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.660249 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerDied","Data":"8e3fa09ea9e28525aa336edf5d5754a51831f07f43133acf1a19f456f601fd93"} Jan 31 15:16:41 crc kubenswrapper[4735]: I0131 15:16:41.698371 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.671138 4735 generic.go:334] "Generic (PLEG): container finished" podID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerID="c66d6894f5d7b8ef37aaaed37239780e27cdf90c09aed9df2f36f26fa6784c64" exitCode=137 Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.671493 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerDied","Data":"c66d6894f5d7b8ef37aaaed37239780e27cdf90c09aed9df2f36f26fa6784c64"} Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.676809 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5620f33b-a10a-41ae-a9f2-707f94ebbe59","Type":"ContainerStarted","Data":"050f717eb9fad783f82586431b31d85c2fd42d80ea1e7925802eedd12d6693df"} Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.722742 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.72271924 podStartE2EDuration="3.72271924s" podCreationTimestamp="2026-01-31 15:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:42.708836658 +0000 UTC m=+1088.482165710" watchObservedRunningTime="2026-01-31 15:16:42.72271924 +0000 UTC m=+1088.496048282" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.843808 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.945585 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.945802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd4ks\" (UniqueName: \"kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.945967 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.946083 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.946191 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.946280 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.946363 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data\") pod \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\" (UID: \"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac\") " Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.948038 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs" (OuterVolumeSpecName: "logs") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.951706 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.952193 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks" (OuterVolumeSpecName: "kube-api-access-nd4ks") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "kube-api-access-nd4ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.970727 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts" (OuterVolumeSpecName: "scripts") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.978556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:42 crc kubenswrapper[4735]: I0131 15:16:42.983665 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data" (OuterVolumeSpecName: "config-data") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:42.999715 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" (UID: "aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048425 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048569 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048587 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048598 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048610 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048621 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.048635 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd4ks\" (UniqueName: \"kubernetes.io/projected/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac-kube-api-access-nd4ks\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.684594 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f754986cd-gdb8n" event={"ID":"aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac","Type":"ContainerDied","Data":"69f1566db94a372fdc33ca61ec6987358f85ef982a06385d45ac368c1e3c91ba"} Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.684926 4735 scope.go:117] "RemoveContainer" containerID="502456fd1de1026a07431dbd3dae3b054005bc33fecf428146696f57607d0db7" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.684940 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="proxy-httpd" containerID="cri-o://98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" gracePeriod=30 Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.684613 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f754986cd-gdb8n" Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.684933 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-central-agent" containerID="cri-o://283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" gracePeriod=30 Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.685043 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="sg-core" containerID="cri-o://cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" gracePeriod=30 Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.685049 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-notification-agent" containerID="cri-o://d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" gracePeriod=30 Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.714757 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.722654 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f754986cd-gdb8n"] Jan 31 15:16:43 crc kubenswrapper[4735]: I0131 15:16:43.855888 4735 scope.go:117] "RemoveContainer" containerID="c66d6894f5d7b8ef37aaaed37239780e27cdf90c09aed9df2f36f26fa6784c64" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.441552 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.574986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.575276 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.575334 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.575410 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.575788 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 
crc kubenswrapper[4735]: I0131 15:16:44.575834 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.575935 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hz7n\" (UniqueName: \"kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n\") pod \"c6678c85-328e-44be-af65-5bd94d43d233\" (UID: \"c6678c85-328e-44be-af65-5bd94d43d233\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.576297 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.576412 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.576855 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.576875 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6678c85-328e-44be-af65-5bd94d43d233-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.581801 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts" (OuterVolumeSpecName: "scripts") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.590684 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n" (OuterVolumeSpecName: "kube-api-access-6hz7n") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "kube-api-access-6hz7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.620051 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.665578 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.682454 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.682480 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hz7n\" (UniqueName: \"kubernetes.io/projected/c6678c85-328e-44be-af65-5bd94d43d233-kube-api-access-6hz7n\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.682492 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.682500 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.704501 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data" (OuterVolumeSpecName: "config-data") pod "c6678c85-328e-44be-af65-5bd94d43d233" (UID: "c6678c85-328e-44be-af65-5bd94d43d233"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.706663 4735 generic.go:334] "Generic (PLEG): container finished" podID="afe1754f-14c1-4a91-b342-3046a183454e" containerID="9c8ac6ac238b16eeb490fab3f29c77bc494a1e409869a8306478eed4d0d6219e" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.706775 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerDied","Data":"9c8ac6ac238b16eeb490fab3f29c77bc494a1e409869a8306478eed4d0d6219e"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715049 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6678c85-328e-44be-af65-5bd94d43d233" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715090 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6678c85-328e-44be-af65-5bd94d43d233" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" exitCode=2 Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715099 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6678c85-328e-44be-af65-5bd94d43d233" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715110 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6678c85-328e-44be-af65-5bd94d43d233" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715131 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerDied","Data":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715161 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerDied","Data":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715175 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerDied","Data":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715186 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerDied","Data":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6678c85-328e-44be-af65-5bd94d43d233","Type":"ContainerDied","Data":"21aee550dd87bc1588269ee0be6c1ce91fc8d5e64151b2b19e0cb17156712de5"} Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715213 4735 scope.go:117] "RemoveContainer" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.715339 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.741718 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.763904 4735 scope.go:117] "RemoveContainer" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.772504 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.784523 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6678c85-328e-44be-af65-5bd94d43d233-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.799628 4735 scope.go:117] "RemoveContainer" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.799795 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818435 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818786 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-notification-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818803 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-notification-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818817 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon-log" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818824 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon-log" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818836 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-log" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818842 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-log" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818849 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="sg-core" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818855 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="sg-core" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818869 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818874 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818884 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-central-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818890 
4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-central-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818904 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="proxy-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818909 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="proxy-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: E0131 15:16:44.818924 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.818930 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819070 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819080 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-notification-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819118 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="proxy-httpd" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819126 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819136 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1754f-14c1-4a91-b342-3046a183454e" containerName="glance-log" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819145 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="ceilometer-central-agent" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819155 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" containerName="horizon-log" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.819165 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6678c85-328e-44be-af65-5bd94d43d233" containerName="sg-core" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.820697 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.832584 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.832904 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.842115 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.856595 4735 scope.go:117] "RemoveContainer" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888020 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888082 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888108 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888129 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888174 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888291 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888311 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvxb\" (UniqueName: \"kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.888382 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts\") pod \"afe1754f-14c1-4a91-b342-3046a183454e\" (UID: \"afe1754f-14c1-4a91-b342-3046a183454e\") " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.889479 4735 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs" (OuterVolumeSpecName: "logs") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.889880 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.893288 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts" (OuterVolumeSpecName: "scripts") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.893489 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.894337 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.897292 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb" (OuterVolumeSpecName: "kube-api-access-fjvxb") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "kube-api-access-fjvxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.924687 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.937387 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data" (OuterVolumeSpecName: "config-data") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.949650 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "afe1754f-14c1-4a91-b342-3046a183454e" (UID: "afe1754f-14c1-4a91-b342-3046a183454e"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991602 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991760 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991780 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9vp\" (UniqueName: \"kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991806 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991828 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991844 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991893 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991905 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjvxb\" (UniqueName: \"kubernetes.io/projected/afe1754f-14c1-4a91-b342-3046a183454e-kube-api-access-fjvxb\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991915 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 
15:16:44.991933 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991942 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991950 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afe1754f-14c1-4a91-b342-3046a183454e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:44 crc kubenswrapper[4735]: I0131 15:16:44.991958 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe1754f-14c1-4a91-b342-3046a183454e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.009782 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.028464 4735 scope.go:117] "RemoveContainer" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:45 crc kubenswrapper[4735]: E0131 15:16:45.028972 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": container with ID starting with 98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3 not found: ID does not exist" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029027 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} err="failed to get container status \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": rpc error: code = NotFound desc = could not find container \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": container with ID starting with 98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029055 4735 scope.go:117] "RemoveContainer" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:45 crc kubenswrapper[4735]: E0131 15:16:45.029449 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": container with ID starting with cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec not found: ID does not exist" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029476 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} err="failed to get container status \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": rpc error: code = NotFound desc = could not find container 
\"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": container with ID starting with cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029497 4735 scope.go:117] "RemoveContainer" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:45 crc kubenswrapper[4735]: E0131 15:16:45.029726 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": container with ID starting with d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee not found: ID does not exist" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029761 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} err="failed to get container status \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": rpc error: code = NotFound desc = could not find container \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": container with ID starting with d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.029781 4735 scope.go:117] "RemoveContainer" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:45 crc kubenswrapper[4735]: E0131 15:16:45.030009 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": container with ID starting with 283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3 not found: ID does not exist" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030029 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} err="failed to get container status \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": rpc error: code = NotFound desc = could not find container \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": container with ID starting with 283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030041 4735 scope.go:117] "RemoveContainer" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030480 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} err="failed to get container status \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": rpc error: code = NotFound desc = could not find container \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": container with ID starting with 98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030500 4735 scope.go:117] "RemoveContainer" 
containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030761 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} err="failed to get container status \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": rpc error: code = NotFound desc = could not find container \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": container with ID starting with cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.030869 4735 scope.go:117] "RemoveContainer" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031151 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} err="failed to get container status \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": rpc error: code = NotFound desc = could not find container \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": container with ID starting with d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031169 4735 scope.go:117] "RemoveContainer" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031466 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} err="failed to get container status \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": rpc error: code = NotFound desc = could not find container \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": container with ID starting with 283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031562 4735 scope.go:117] "RemoveContainer" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031880 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} err="failed to get container status \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": rpc error: code = NotFound desc = could not find container \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": container with ID starting with 98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.031914 4735 scope.go:117] "RemoveContainer" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.032174 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} err="failed to get container status \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": rpc error: code = NotFound desc = could not find 
container \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": container with ID starting with cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.032195 4735 scope.go:117] "RemoveContainer" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.034507 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} err="failed to get container status \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": rpc error: code = NotFound desc = could not find container \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": container with ID starting with d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.034632 4735 scope.go:117] "RemoveContainer" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.035651 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} err="failed to get container status \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": rpc error: code = NotFound desc = could not find container \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": container with ID starting with 283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.035674 4735 scope.go:117] "RemoveContainer" containerID="98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.035852 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3"} err="failed to get container status \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": rpc error: code = NotFound desc = could not find container \"98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3\": container with ID starting with 98ee0f32d646ef6816b8b6b85665c48e633e6e411a44981f83b0017057b312c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.035942 4735 scope.go:117] "RemoveContainer" containerID="cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.037268 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec"} err="failed to get container status \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": rpc error: code = NotFound desc = could not find container \"cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec\": container with ID starting with cc3eaeed36a4e45ebe411ed32a52234bce17aaf96c87f7e20e7128b39b8107ec not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.037292 4735 scope.go:117] "RemoveContainer" containerID="d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.037656 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee"} err="failed to get container status \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": rpc error: code = NotFound desc = could not find container \"d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee\": container with ID starting with d9b30deda02f0f9ac20701a631aae99ac76b072e815a9c11b3025b394c971bee not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.037742 4735 scope.go:117] "RemoveContainer" containerID="283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.042883 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3"} err="failed to get container status \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": rpc error: code = NotFound desc = could not find container \"283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3\": container with ID starting with 283134baa888e6637c749d7ec5a2564939db486d4552896c4834332ebba3f2c3 not found: ID does not exist" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093338 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093629 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093700 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9vp\" (UniqueName: \"kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093775 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093853 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093921 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.093998 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.094110 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.094123 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.094376 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.097336 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.097947 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.098050 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.101052 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.120089 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9vp\" (UniqueName: \"kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp\") pod \"ceilometer-0\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.162035 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.550220 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac" path="/var/lib/kubelet/pods/aeacd169-f0d1-4ad7-9c8b-4c08ecb4c9ac/volumes" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.551460 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6678c85-328e-44be-af65-5bd94d43d233" path="/var/lib/kubelet/pods/c6678c85-328e-44be-af65-5bd94d43d233/volumes" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.631014 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.724718 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerStarted","Data":"963057cff1abba66ab269165f34b7b83fe7d3d8bbf2dc666eec01f0972bcd446"} Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.726918 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe1754f-14c1-4a91-b342-3046a183454e","Type":"ContainerDied","Data":"78586c1c6ef2809a6630598829906e8c624230af76a454ccda96172273c260cf"} Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.726965 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.726988 4735 scope.go:117] "RemoveContainer" containerID="9c8ac6ac238b16eeb490fab3f29c77bc494a1e409869a8306478eed4d0d6219e" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.755672 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.762769 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.764380 4735 scope.go:117] "RemoveContainer" containerID="8e3fa09ea9e28525aa336edf5d5754a51831f07f43133acf1a19f456f601fd93" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.786065 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.787553 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.804126 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.804134 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.830772 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908220 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908241 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908262 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908307 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908372 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:45 crc kubenswrapper[4735]: I0131 15:16:45.908394 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-f4w7p\" (UniqueName: \"kubernetes.io/projected/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-kube-api-access-f4w7p\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010356 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010407 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010511 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010549 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4w7p\" (UniqueName: \"kubernetes.io/projected/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-kube-api-access-f4w7p\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010623 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010655 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010679 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.010706 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.011037 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.011405 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.011674 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.016289 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.016574 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.030170 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.036866 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.051485 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.052603 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4w7p\" (UniqueName: \"kubernetes.io/projected/fdceee3d-5a28-4c46-bd6e-40048cdd56c9-kube-api-access-f4w7p\") pod \"glance-default-internal-api-0\" (UID: \"fdceee3d-5a28-4c46-bd6e-40048cdd56c9\") " pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.121517 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.604619 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.744645 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerStarted","Data":"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756"} Jan 31 15:16:46 crc kubenswrapper[4735]: I0131 15:16:46.763949 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 15:16:46 crc kubenswrapper[4735]: W0131 15:16:46.775731 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdceee3d_5a28_4c46_bd6e_40048cdd56c9.slice/crio-8db1828a0f5513a3cd2130dd9dfeb803452bd235de53af4178f7150d684ab386 WatchSource:0}: Error finding container 8db1828a0f5513a3cd2130dd9dfeb803452bd235de53af4178f7150d684ab386: Status 404 returned error can't find the container with id 8db1828a0f5513a3cd2130dd9dfeb803452bd235de53af4178f7150d684ab386 Jan 31 15:16:47 crc kubenswrapper[4735]: I0131 15:16:47.555884 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe1754f-14c1-4a91-b342-3046a183454e" path="/var/lib/kubelet/pods/afe1754f-14c1-4a91-b342-3046a183454e/volumes" Jan 31 15:16:47 crc kubenswrapper[4735]: I0131 15:16:47.809274 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerStarted","Data":"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d"} Jan 31 15:16:47 crc kubenswrapper[4735]: I0131 15:16:47.809325 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerStarted","Data":"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015"} Jan 31 15:16:47 crc kubenswrapper[4735]: I0131 15:16:47.815398 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdceee3d-5a28-4c46-bd6e-40048cdd56c9","Type":"ContainerStarted","Data":"98597c88da20107a8f6d131b84bdac893c2d6106b3b3780c64384906fe0423f5"} Jan 31 15:16:47 crc kubenswrapper[4735]: I0131 15:16:47.815509 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdceee3d-5a28-4c46-bd6e-40048cdd56c9","Type":"ContainerStarted","Data":"8db1828a0f5513a3cd2130dd9dfeb803452bd235de53af4178f7150d684ab386"} Jan 31 15:16:48 crc kubenswrapper[4735]: I0131 15:16:48.833744 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdceee3d-5a28-4c46-bd6e-40048cdd56c9","Type":"ContainerStarted","Data":"f479648d89212974890057141d2cda6d95e579ad2d00465f53d2b34e65aa764d"} Jan 31 15:16:48 crc kubenswrapper[4735]: I0131 15:16:48.870264 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.8702399290000002 podStartE2EDuration="3.870239929s" podCreationTimestamp="2026-01-31 15:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:48.861516343 +0000 UTC m=+1094.634845435" watchObservedRunningTime="2026-01-31 
15:16:48.870239929 +0000 UTC m=+1094.643568991" Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.874277 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-m75l9"] Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.875528 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.883212 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m75l9"] Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.978397 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6hwhn"] Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.979612 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.988882 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hwhn"] Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.997747 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e898-account-create-update-bbb72"] Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.998886 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts\") pod \"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.999159 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sgrk\" (UniqueName: \"kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk\") pod \"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:49 crc kubenswrapper[4735]: I0131 15:16:49.999790 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.002116 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.023197 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e898-account-create-update-bbb72"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.084874 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.085158 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.100869 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts\") pod \"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.100983 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrm9\" (UniqueName: \"kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.101042 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.101112 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sgrk\" (UniqueName: \"kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk\") pod \"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.101140 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.101167 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mp4\" (UniqueName: \"kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.101984 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts\") pod 
\"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.117979 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.120119 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sgrk\" (UniqueName: \"kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk\") pod \"nova-api-db-create-m75l9\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.121432 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.192208 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zp7s7"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.193415 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.199540 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a96c-account-create-update-lkt5p"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.200962 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.202338 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.202448 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.202470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mp4\" (UniqueName: \"kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.202603 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrm9\" (UniqueName: \"kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.202782 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.203585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.204317 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.205512 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.209696 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zp7s7"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.230628 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mp4\" (UniqueName: \"kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4\") pod \"nova-api-e898-account-create-update-bbb72\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.237000 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrm9\" (UniqueName: \"kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9\") pod \"nova-cell0-db-create-6hwhn\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.258866 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a96c-account-create-update-lkt5p"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.296356 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.305745 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.305810 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cwqn\" (UniqueName: \"kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.305843 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.305929 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqfl\" (UniqueName: \"kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.333833 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.389869 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5ebc-account-create-update-pxltn"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.397628 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5ebc-account-create-update-pxltn"] Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.397735 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.401384 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411081 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411131 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gb7\" (UniqueName: \"kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411169 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cwqn\" (UniqueName: \"kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411198 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.411312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqfl\" (UniqueName: \"kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.412507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.421292 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " 
pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.432624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cwqn\" (UniqueName: \"kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn\") pod \"nova-cell1-db-create-zp7s7\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.434126 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqfl\" (UniqueName: \"kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl\") pod \"nova-cell0-a96c-account-create-update-lkt5p\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.514372 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gb7\" (UniqueName: \"kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.515003 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.515261 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.517227 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.526988 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.535747 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gb7\" (UniqueName: \"kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7\") pod \"nova-cell1-5ebc-account-create-update-pxltn\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.732410 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m75l9"] Jan 31 15:16:50 crc kubenswrapper[4735]: W0131 15:16:50.758498 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b49ae1c_9cf5_4e9d_9142_502444638432.slice/crio-277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707 WatchSource:0}: Error finding container 277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707: Status 404 returned error can't find the container with id 277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707 Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.764185 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.866558 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m75l9" event={"ID":"5b49ae1c-9cf5-4e9d-9142-502444638432","Type":"ContainerStarted","Data":"277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707"} Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.867387 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.867416 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 15:16:50 crc kubenswrapper[4735]: I0131 15:16:50.943243 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hwhn"] Jan 31 15:16:50 crc kubenswrapper[4735]: W0131 15:16:50.954115 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda663199a_85bc_4931_bac4_6d060201ac38.slice/crio-9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076 WatchSource:0}: Error finding container 9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076: Status 404 returned error can't find the container with id 9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.063556 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e898-account-create-update-bbb72"] Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.212560 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a96c-account-create-update-lkt5p"] Jan 31 15:16:51 crc kubenswrapper[4735]: W0131 15:16:51.218368 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bf75c5_68b8_41d5_8815_6f6120f2271c.slice/crio-e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140 WatchSource:0}: Error finding container e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140: 
Status 404 returned error can't find the container with id e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140 Jan 31 15:16:51 crc kubenswrapper[4735]: W0131 15:16:51.221509 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd54b2315_f6e7_4c5a_ab66_51d808de8aa1.slice/crio-96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152 WatchSource:0}: Error finding container 96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152: Status 404 returned error can't find the container with id 96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.231513 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zp7s7"] Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.351017 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5ebc-account-create-update-pxltn"] Jan 31 15:16:51 crc kubenswrapper[4735]: W0131 15:16:51.368402 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b3a8e3_173f_4596_9712_ef4f7c324113.slice/crio-619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf WatchSource:0}: Error finding container 619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf: Status 404 returned error can't find the container with id 619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.884656 4735 generic.go:334] "Generic (PLEG): container finished" podID="a663199a-85bc-4931-bac4-6d060201ac38" containerID="158bbd6a9ec5510ff12ebc71df1ec14cf8e726de4aefcb76c3773e47f76ea832" exitCode=0 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.884979 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hwhn" event={"ID":"a663199a-85bc-4931-bac4-6d060201ac38","Type":"ContainerDied","Data":"158bbd6a9ec5510ff12ebc71df1ec14cf8e726de4aefcb76c3773e47f76ea832"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.885007 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hwhn" event={"ID":"a663199a-85bc-4931-bac4-6d060201ac38","Type":"ContainerStarted","Data":"9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.898872 4735 generic.go:334] "Generic (PLEG): container finished" podID="5b49ae1c-9cf5-4e9d-9142-502444638432" containerID="eadc9a029ace25a7b082c1c5e3179d31bdba1b9f4ad148adfabe34457fa606e9" exitCode=0 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.898952 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m75l9" event={"ID":"5b49ae1c-9cf5-4e9d-9142-502444638432","Type":"ContainerDied","Data":"eadc9a029ace25a7b082c1c5e3179d31bdba1b9f4ad148adfabe34457fa606e9"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.913399 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" event={"ID":"c8b3a8e3-173f-4596-9712-ef4f7c324113","Type":"ContainerStarted","Data":"a8fbbb84d4cf75b4d760b529f62865ce263641943d6418088c1d70ec81746a83"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.913459 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" 
event={"ID":"c8b3a8e3-173f-4596-9712-ef4f7c324113","Type":"ContainerStarted","Data":"619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.919644 4735 generic.go:334] "Generic (PLEG): container finished" podID="d54b2315-f6e7-4c5a-ab66-51d808de8aa1" containerID="c9d435fe0af85102cbe325d3dd26f08f5cbae1a04ed65f183d450d140b0df0b0" exitCode=0 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.919734 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zp7s7" event={"ID":"d54b2315-f6e7-4c5a-ab66-51d808de8aa1","Type":"ContainerDied","Data":"c9d435fe0af85102cbe325d3dd26f08f5cbae1a04ed65f183d450d140b0df0b0"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.919763 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zp7s7" event={"ID":"d54b2315-f6e7-4c5a-ab66-51d808de8aa1","Type":"ContainerStarted","Data":"96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.942306 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e898-account-create-update-bbb72" event={"ID":"3f8d5779-64d7-431f-9202-eaa876df3de4","Type":"ContainerStarted","Data":"9febf6313c5438c533888751d30df54fb9e7a58f4d797198ca5a69be53f74ac7"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.942349 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e898-account-create-update-bbb72" event={"ID":"3f8d5779-64d7-431f-9202-eaa876df3de4","Type":"ContainerStarted","Data":"a1c71e0f84e5010adacdeae8945dbbf045033426ab16ace58550957ff7fd198b"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.968725 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" event={"ID":"b2bf75c5-68b8-41d5-8815-6f6120f2271c","Type":"ContainerStarted","Data":"fd3ae525f3b50033fe4b071ae77295b9f5f80da46513d9ce08db8542309652a8"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.968768 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" event={"ID":"b2bf75c5-68b8-41d5-8815-6f6120f2271c","Type":"ContainerStarted","Data":"e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.970489 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" podStartSLOduration=1.970480297 podStartE2EDuration="1.970480297s" podCreationTimestamp="2026-01-31 15:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:51.969966722 +0000 UTC m=+1097.743295774" watchObservedRunningTime="2026-01-31 15:16:51.970480297 +0000 UTC m=+1097.743809339" Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.981988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerStarted","Data":"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff"} Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.982297 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-central-agent" 
containerID="cri-o://9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756" gracePeriod=30 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.982512 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.982521 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="sg-core" containerID="cri-o://d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d" gracePeriod=30 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.982556 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-notification-agent" containerID="cri-o://5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015" gracePeriod=30 Jan 31 15:16:51 crc kubenswrapper[4735]: I0131 15:16:51.982581 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="proxy-httpd" containerID="cri-o://dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff" gracePeriod=30 Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.052336 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.682920882 podStartE2EDuration="8.052320709s" podCreationTimestamp="2026-01-31 15:16:44 +0000 UTC" firstStartedPulling="2026-01-31 15:16:45.639673379 +0000 UTC m=+1091.413002431" lastFinishedPulling="2026-01-31 15:16:51.009073216 +0000 UTC m=+1096.782402258" observedRunningTime="2026-01-31 15:16:52.039700213 +0000 UTC m=+1097.813029275" watchObservedRunningTime="2026-01-31 15:16:52.052320709 +0000 UTC m=+1097.825649751" Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.064840 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e898-account-create-update-bbb72" podStartSLOduration=3.0648257819999998 podStartE2EDuration="3.064825782s" podCreationTimestamp="2026-01-31 15:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:52.060641604 +0000 UTC m=+1097.833970646" watchObservedRunningTime="2026-01-31 15:16:52.064825782 +0000 UTC m=+1097.838154824" Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.993619 4735 generic.go:334] "Generic (PLEG): container finished" podID="3f8d5779-64d7-431f-9202-eaa876df3de4" containerID="9febf6313c5438c533888751d30df54fb9e7a58f4d797198ca5a69be53f74ac7" exitCode=0 Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.993869 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e898-account-create-update-bbb72" event={"ID":"3f8d5779-64d7-431f-9202-eaa876df3de4","Type":"ContainerDied","Data":"9febf6313c5438c533888751d30df54fb9e7a58f4d797198ca5a69be53f74ac7"} Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.997835 4735 generic.go:334] "Generic (PLEG): container finished" podID="b2bf75c5-68b8-41d5-8815-6f6120f2271c" containerID="fd3ae525f3b50033fe4b071ae77295b9f5f80da46513d9ce08db8542309652a8" exitCode=0 Jan 31 15:16:52 crc kubenswrapper[4735]: I0131 15:16:52.997900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" 
event={"ID":"b2bf75c5-68b8-41d5-8815-6f6120f2271c","Type":"ContainerDied","Data":"fd3ae525f3b50033fe4b071ae77295b9f5f80da46513d9ce08db8542309652a8"} Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.002867 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023751 4735 generic.go:334] "Generic (PLEG): container finished" podID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerID="dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff" exitCode=0 Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023787 4735 generic.go:334] "Generic (PLEG): container finished" podID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerID="d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d" exitCode=2 Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023817 4735 generic.go:334] "Generic (PLEG): container finished" podID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerID="5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015" exitCode=0 Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023871 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerDied","Data":"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff"} Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023898 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerDied","Data":"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d"} Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.023909 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerDied","Data":"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015"} Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.026700 4735 generic.go:334] "Generic (PLEG): container finished" podID="c8b3a8e3-173f-4596-9712-ef4f7c324113" containerID="a8fbbb84d4cf75b4d760b529f62865ce263641943d6418088c1d70ec81746a83" exitCode=0 Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.026926 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" event={"ID":"c8b3a8e3-173f-4596-9712-ef4f7c324113","Type":"ContainerDied","Data":"a8fbbb84d4cf75b4d760b529f62865ce263641943d6418088c1d70ec81746a83"} Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.027119 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.125532 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.475027 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.572572 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts\") pod \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.572910 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slqfl\" (UniqueName: \"kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl\") pod \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\" (UID: \"b2bf75c5-68b8-41d5-8815-6f6120f2271c\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.573210 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2bf75c5-68b8-41d5-8815-6f6120f2271c" (UID: "b2bf75c5-68b8-41d5-8815-6f6120f2271c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.573547 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bf75c5-68b8-41d5-8815-6f6120f2271c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.578677 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl" (OuterVolumeSpecName: "kube-api-access-slqfl") pod "b2bf75c5-68b8-41d5-8815-6f6120f2271c" (UID: "b2bf75c5-68b8-41d5-8815-6f6120f2271c"). InnerVolumeSpecName "kube-api-access-slqfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.663999 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.673806 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.674990 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slqfl\" (UniqueName: \"kubernetes.io/projected/b2bf75c5-68b8-41d5-8815-6f6120f2271c-kube-api-access-slqfl\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.686182 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.775979 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts\") pod \"5b49ae1c-9cf5-4e9d-9142-502444638432\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776065 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sgrk\" (UniqueName: \"kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk\") pod \"5b49ae1c-9cf5-4e9d-9142-502444638432\" (UID: \"5b49ae1c-9cf5-4e9d-9142-502444638432\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776149 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cwqn\" (UniqueName: \"kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn\") pod \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776235 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts\") pod \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\" (UID: \"d54b2315-f6e7-4c5a-ab66-51d808de8aa1\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776289 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrm9\" (UniqueName: \"kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9\") pod \"a663199a-85bc-4931-bac4-6d060201ac38\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776534 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b49ae1c-9cf5-4e9d-9142-502444638432" (UID: "5b49ae1c-9cf5-4e9d-9142-502444638432"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776651 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54b2315-f6e7-4c5a-ab66-51d808de8aa1" (UID: "d54b2315-f6e7-4c5a-ab66-51d808de8aa1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776335 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts\") pod \"a663199a-85bc-4931-bac4-6d060201ac38\" (UID: \"a663199a-85bc-4931-bac4-6d060201ac38\") " Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.776919 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a663199a-85bc-4931-bac4-6d060201ac38" (UID: "a663199a-85bc-4931-bac4-6d060201ac38"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.777217 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a663199a-85bc-4931-bac4-6d060201ac38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.777236 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b49ae1c-9cf5-4e9d-9142-502444638432-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.777245 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.779888 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9" (OuterVolumeSpecName: "kube-api-access-vnrm9") pod "a663199a-85bc-4931-bac4-6d060201ac38" (UID: "a663199a-85bc-4931-bac4-6d060201ac38"). InnerVolumeSpecName "kube-api-access-vnrm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.780048 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn" (OuterVolumeSpecName: "kube-api-access-4cwqn") pod "d54b2315-f6e7-4c5a-ab66-51d808de8aa1" (UID: "d54b2315-f6e7-4c5a-ab66-51d808de8aa1"). InnerVolumeSpecName "kube-api-access-4cwqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.782530 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk" (OuterVolumeSpecName: "kube-api-access-8sgrk") pod "5b49ae1c-9cf5-4e9d-9142-502444638432" (UID: "5b49ae1c-9cf5-4e9d-9142-502444638432"). InnerVolumeSpecName "kube-api-access-8sgrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.878672 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cwqn\" (UniqueName: \"kubernetes.io/projected/d54b2315-f6e7-4c5a-ab66-51d808de8aa1-kube-api-access-4cwqn\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.879040 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrm9\" (UniqueName: \"kubernetes.io/projected/a663199a-85bc-4931-bac4-6d060201ac38-kube-api-access-vnrm9\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:53 crc kubenswrapper[4735]: I0131 15:16:53.879135 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sgrk\" (UniqueName: \"kubernetes.io/projected/5b49ae1c-9cf5-4e9d-9142-502444638432-kube-api-access-8sgrk\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.037120 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.037083 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a96c-account-create-update-lkt5p" event={"ID":"b2bf75c5-68b8-41d5-8815-6f6120f2271c","Type":"ContainerDied","Data":"e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140"} Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.038561 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a964fc44165ce16022a10c99720d6c66598e9eeea94cb55678fdb4aca1d140" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.039221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hwhn" event={"ID":"a663199a-85bc-4931-bac4-6d060201ac38","Type":"ContainerDied","Data":"9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076"} Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.039261 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9667031e6b13663f08bf8a411de8d123280b4c01070be9a1fa740c59c4309076" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.039226 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hwhn" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.040908 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m75l9" event={"ID":"5b49ae1c-9cf5-4e9d-9142-502444638432","Type":"ContainerDied","Data":"277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707"} Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.040947 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="277e7f670c5d171abb8c4003500ee6146df154d304f2e7841ee780ade113c707" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.040957 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m75l9" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.042152 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zp7s7" event={"ID":"d54b2315-f6e7-4c5a-ab66-51d808de8aa1","Type":"ContainerDied","Data":"96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152"} Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.042206 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96774d01af48998dbfd0cde4d9e4f93c8e8b7ba8ac6a79be7e26e4765990d152" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.042243 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zp7s7" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.491749 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.598753 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78gb7\" (UniqueName: \"kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7\") pod \"c8b3a8e3-173f-4596-9712-ef4f7c324113\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.598802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts\") pod \"c8b3a8e3-173f-4596-9712-ef4f7c324113\" (UID: \"c8b3a8e3-173f-4596-9712-ef4f7c324113\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.599634 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b3a8e3-173f-4596-9712-ef4f7c324113" (UID: "c8b3a8e3-173f-4596-9712-ef4f7c324113"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.603404 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7" (OuterVolumeSpecName: "kube-api-access-78gb7") pod "c8b3a8e3-173f-4596-9712-ef4f7c324113" (UID: "c8b3a8e3-173f-4596-9712-ef4f7c324113"). InnerVolumeSpecName "kube-api-access-78gb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.666423 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.671815 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.700587 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78gb7\" (UniqueName: \"kubernetes.io/projected/c8b3a8e3-173f-4596-9712-ef4f7c324113-kube-api-access-78gb7\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.700618 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3a8e3-173f-4596-9712-ef4f7c324113-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802126 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9vp\" (UniqueName: \"kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802200 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802240 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802299 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86mp4\" (UniqueName: \"kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4\") pod \"3f8d5779-64d7-431f-9202-eaa876df3de4\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802350 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802416 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802605 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802663 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts\") pod \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\" (UID: \"bf285002-7308-4a9f-8c0c-ad21ae0b0667\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.802697 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts\") pod \"3f8d5779-64d7-431f-9202-eaa876df3de4\" (UID: \"3f8d5779-64d7-431f-9202-eaa876df3de4\") " Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.803038 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.803174 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.803385 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.803688 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f8d5779-64d7-431f-9202-eaa876df3de4" (UID: "3f8d5779-64d7-431f-9202-eaa876df3de4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.806880 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4" (OuterVolumeSpecName: "kube-api-access-86mp4") pod "3f8d5779-64d7-431f-9202-eaa876df3de4" (UID: "3f8d5779-64d7-431f-9202-eaa876df3de4"). InnerVolumeSpecName "kube-api-access-86mp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.808089 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp" (OuterVolumeSpecName: "kube-api-access-bl9vp") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "kube-api-access-bl9vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.809181 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts" (OuterVolumeSpecName: "scripts") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.835540 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.896581 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905110 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86mp4\" (UniqueName: \"kubernetes.io/projected/3f8d5779-64d7-431f-9202-eaa876df3de4-kube-api-access-86mp4\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905154 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf285002-7308-4a9f-8c0c-ad21ae0b0667-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905168 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905182 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8d5779-64d7-431f-9202-eaa876df3de4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905196 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9vp\" (UniqueName: \"kubernetes.io/projected/bf285002-7308-4a9f-8c0c-ad21ae0b0667-kube-api-access-bl9vp\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905207 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.905218 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:54 crc kubenswrapper[4735]: I0131 15:16:54.910606 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data" (OuterVolumeSpecName: "config-data") pod "bf285002-7308-4a9f-8c0c-ad21ae0b0667" (UID: "bf285002-7308-4a9f-8c0c-ad21ae0b0667"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.006916 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf285002-7308-4a9f-8c0c-ad21ae0b0667-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.054735 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e898-account-create-update-bbb72" event={"ID":"3f8d5779-64d7-431f-9202-eaa876df3de4","Type":"ContainerDied","Data":"a1c71e0f84e5010adacdeae8945dbbf045033426ab16ace58550957ff7fd198b"} Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.054773 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c71e0f84e5010adacdeae8945dbbf045033426ab16ace58550957ff7fd198b" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.054788 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e898-account-create-update-bbb72" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.057457 4735 generic.go:334] "Generic (PLEG): container finished" podID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerID="9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756" exitCode=0 Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.057484 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerDied","Data":"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756"} Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.057496 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.057513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf285002-7308-4a9f-8c0c-ad21ae0b0667","Type":"ContainerDied","Data":"963057cff1abba66ab269165f34b7b83fe7d3d8bbf2dc666eec01f0972bcd446"} Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.057530 4735 scope.go:117] "RemoveContainer" containerID="dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.060924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" event={"ID":"c8b3a8e3-173f-4596-9712-ef4f7c324113","Type":"ContainerDied","Data":"619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf"} Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.060973 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619070749a222d411d4ebd2ed3d3b732e2a3df127e6c9b3f20102bc9839636cf" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.061048 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5ebc-account-create-update-pxltn" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.090189 4735 scope.go:117] "RemoveContainer" containerID="d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.116706 4735 scope.go:117] "RemoveContainer" containerID="5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.145756 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.160401 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168356 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168779 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b3a8e3-173f-4596-9712-ef4f7c324113" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168800 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b3a8e3-173f-4596-9712-ef4f7c324113" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168821 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a663199a-85bc-4931-bac4-6d060201ac38" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168830 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a663199a-85bc-4931-bac4-6d060201ac38" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168847 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-central-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168855 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-central-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168865 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b49ae1c-9cf5-4e9d-9142-502444638432" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168871 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b49ae1c-9cf5-4e9d-9142-502444638432" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168884 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="proxy-httpd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168890 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="proxy-httpd" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168897 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54b2315-f6e7-4c5a-ab66-51d808de8aa1" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168904 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54b2315-f6e7-4c5a-ab66-51d808de8aa1" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168916 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="sg-core" Jan 
31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168922 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="sg-core" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168935 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-notification-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168941 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-notification-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168949 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8d5779-64d7-431f-9202-eaa876df3de4" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168955 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8d5779-64d7-431f-9202-eaa876df3de4" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.168964 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bf75c5-68b8-41d5-8815-6f6120f2271c" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.168970 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bf75c5-68b8-41d5-8815-6f6120f2271c" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169153 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-central-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169171 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bf75c5-68b8-41d5-8815-6f6120f2271c" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169185 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54b2315-f6e7-4c5a-ab66-51d808de8aa1" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169193 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="ceilometer-notification-agent" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169206 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a663199a-85bc-4931-bac4-6d060201ac38" containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169215 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8d5779-64d7-431f-9202-eaa876df3de4" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169225 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="sg-core" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169235 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b3a8e3-173f-4596-9712-ef4f7c324113" containerName="mariadb-account-create-update" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169245 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" containerName="proxy-httpd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.169254 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b49ae1c-9cf5-4e9d-9142-502444638432" 
containerName="mariadb-database-create" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.171155 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.171644 4735 scope.go:117] "RemoveContainer" containerID="9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.175498 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.175824 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.180515 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.201658 4735 scope.go:117] "RemoveContainer" containerID="dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.202145 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff\": container with ID starting with dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff not found: ID does not exist" containerID="dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.202197 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff"} err="failed to get container status \"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff\": rpc error: code = NotFound desc = could not find container \"dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff\": container with ID starting with dbd2dc55cdf792c36de0543e90b2343e93df78b390914082def2fe851648a3ff not found: ID does not exist" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.202232 4735 scope.go:117] "RemoveContainer" containerID="d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.202689 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d\": container with ID starting with d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d not found: ID does not exist" containerID="d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.202732 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d"} err="failed to get container status \"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d\": rpc error: code = NotFound desc = could not find container \"d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d\": container with ID starting with d199fcd3f04ed411638451ae2e2eb83b16ed5e2cf1150af588f238dde8c1559d not found: ID does not exist" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.202760 4735 scope.go:117] "RemoveContainer" containerID="5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015" Jan 31 15:16:55 crc 
kubenswrapper[4735]: E0131 15:16:55.203216 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015\": container with ID starting with 5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015 not found: ID does not exist" containerID="5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.203299 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015"} err="failed to get container status \"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015\": rpc error: code = NotFound desc = could not find container \"5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015\": container with ID starting with 5328da0e35f9af2e0ba5002370b1496cc7b78a061f79bdcd352b8a98b1fcf015 not found: ID does not exist" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.203369 4735 scope.go:117] "RemoveContainer" containerID="9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756" Jan 31 15:16:55 crc kubenswrapper[4735]: E0131 15:16:55.203790 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756\": container with ID starting with 9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756 not found: ID does not exist" containerID="9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.203872 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756"} err="failed to get container status \"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756\": rpc error: code = NotFound desc = could not find container \"9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756\": container with ID starting with 9560c1ada42d588e1bbdd7aa3f63bcd3fe63cf7809bac8f87ee66fea07ee0756 not found: ID does not exist" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212371 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212480 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfx4\" (UniqueName: \"kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212503 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212545 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212611 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.212658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314146 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314689 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314723 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314777 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfx4\" (UniqueName: \"kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314906 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.314636 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.316543 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.321473 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.321803 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.322189 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.323611 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.332128 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfx4\" (UniqueName: \"kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4\") pod \"ceilometer-0\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.495094 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.555020 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf285002-7308-4a9f-8c0c-ad21ae0b0667" path="/var/lib/kubelet/pods/bf285002-7308-4a9f-8c0c-ad21ae0b0667/volumes" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.558709 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gddhd"] Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.567034 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.568853 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rz8sg" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.569408 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.569645 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.576241 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gddhd"] Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.620013 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.620238 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh82t\" (UniqueName: \"kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.620265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.620307 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.722499 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.722552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh82t\" (UniqueName: \"kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.722579 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gddhd\" 
(UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.722640 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.727571 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.727977 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.737618 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.738032 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh82t\" (UniqueName: \"kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t\") pod \"nova-cell0-conductor-db-sync-gddhd\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.920010 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:16:55 crc kubenswrapper[4735]: I0131 15:16:55.974147 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:16:55 crc kubenswrapper[4735]: W0131 15:16:55.976875 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ec6049_21e4_46bc_9d63_b5c826ff0593.slice/crio-d3ef1cc63fd8275a8b46eddfd0fc599eeafb9978ed958bebea5b35ca39a6994f WatchSource:0}: Error finding container d3ef1cc63fd8275a8b46eddfd0fc599eeafb9978ed958bebea5b35ca39a6994f: Status 404 returned error can't find the container with id d3ef1cc63fd8275a8b46eddfd0fc599eeafb9978ed958bebea5b35ca39a6994f Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.071037 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerStarted","Data":"d3ef1cc63fd8275a8b46eddfd0fc599eeafb9978ed958bebea5b35ca39a6994f"} Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.122659 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.122707 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.155124 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.179284 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:56 crc kubenswrapper[4735]: I0131 15:16:56.370013 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gddhd"] Jan 31 15:16:56 crc kubenswrapper[4735]: W0131 15:16:56.379872 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578e9195_5203_4b74_ae9b_90137faafc8b.slice/crio-d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298 WatchSource:0}: Error finding container d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298: Status 404 returned error can't find the container with id d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298 Jan 31 15:16:57 crc kubenswrapper[4735]: I0131 15:16:57.093697 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gddhd" event={"ID":"578e9195-5203-4b74-ae9b-90137faafc8b","Type":"ContainerStarted","Data":"d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298"} Jan 31 15:16:57 crc kubenswrapper[4735]: I0131 15:16:57.096688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerStarted","Data":"9c7304d22af4bedff01e63d191f8cbc3e5be8ec4daefb71eddc7ab3393477959"} Jan 31 15:16:57 crc kubenswrapper[4735]: I0131 15:16:57.096783 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:57 crc kubenswrapper[4735]: I0131 15:16:57.096814 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:58 crc kubenswrapper[4735]: I0131 15:16:58.113038 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerStarted","Data":"7d2dda77af76f5e733eef355993ad13ba3e499485a19de89ba0956204407ebd2"} Jan 31 15:16:59 crc kubenswrapper[4735]: I0131 15:16:59.041088 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:59 crc kubenswrapper[4735]: I0131 15:16:59.111968 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 15:16:59 crc kubenswrapper[4735]: I0131 15:16:59.121910 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerStarted","Data":"f94af53ee5041a615cd23f74ca9508f2feaa1db631fab33acb72460ff9640b9b"} Jan 31 15:17:01 crc kubenswrapper[4735]: I0131 15:17:01.064506 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201340 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerStarted","Data":"1eea02330780fdf6badafd39726857c469d9127829185bc5d426f5e86f09342d"} Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201913 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201628 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="sg-core" containerID="cri-o://f94af53ee5041a615cd23f74ca9508f2feaa1db631fab33acb72460ff9640b9b" gracePeriod=30 Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201524 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-central-agent" containerID="cri-o://9c7304d22af4bedff01e63d191f8cbc3e5be8ec4daefb71eddc7ab3393477959" gracePeriod=30 Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201673 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="proxy-httpd" containerID="cri-o://1eea02330780fdf6badafd39726857c469d9127829185bc5d426f5e86f09342d" gracePeriod=30 Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.201643 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-notification-agent" containerID="cri-o://7d2dda77af76f5e733eef355993ad13ba3e499485a19de89ba0956204407ebd2" gracePeriod=30 Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.233466 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.54710415 podStartE2EDuration="12.233447908s" podCreationTimestamp="2026-01-31 15:16:55 +0000 UTC" firstStartedPulling="2026-01-31 15:16:55.980044882 +0000 UTC m=+1101.753373914" lastFinishedPulling="2026-01-31 15:17:05.66638862 +0000 UTC m=+1111.439717672" observedRunningTime="2026-01-31 15:17:07.230180655 +0000 UTC m=+1113.003509717" watchObservedRunningTime="2026-01-31 15:17:07.233447908 +0000 UTC m=+1113.006776960" Jan 31 15:17:07 crc kubenswrapper[4735]: 
I0131 15:17:07.345907 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:17:07 crc kubenswrapper[4735]: I0131 15:17:07.346041 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.216749 4735 generic.go:334] "Generic (PLEG): container finished" podID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerID="1eea02330780fdf6badafd39726857c469d9127829185bc5d426f5e86f09342d" exitCode=0 Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.217135 4735 generic.go:334] "Generic (PLEG): container finished" podID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerID="f94af53ee5041a615cd23f74ca9508f2feaa1db631fab33acb72460ff9640b9b" exitCode=2 Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.217155 4735 generic.go:334] "Generic (PLEG): container finished" podID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerID="9c7304d22af4bedff01e63d191f8cbc3e5be8ec4daefb71eddc7ab3393477959" exitCode=0 Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.216897 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerDied","Data":"1eea02330780fdf6badafd39726857c469d9127829185bc5d426f5e86f09342d"} Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.217248 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerDied","Data":"f94af53ee5041a615cd23f74ca9508f2feaa1db631fab33acb72460ff9640b9b"} Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.217270 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerDied","Data":"9c7304d22af4bedff01e63d191f8cbc3e5be8ec4daefb71eddc7ab3393477959"} Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.219108 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gddhd" event={"ID":"578e9195-5203-4b74-ae9b-90137faafc8b","Type":"ContainerStarted","Data":"7053a470168586a1d76f370c5ce6a1d733a91484a850eb2452efddf927a25358"} Jan 31 15:17:08 crc kubenswrapper[4735]: I0131 15:17:08.251163 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gddhd" podStartSLOduration=2.566749407 podStartE2EDuration="13.251141527s" podCreationTimestamp="2026-01-31 15:16:55 +0000 UTC" firstStartedPulling="2026-01-31 15:16:56.38340076 +0000 UTC m=+1102.156729802" lastFinishedPulling="2026-01-31 15:17:07.06779284 +0000 UTC m=+1112.841121922" observedRunningTime="2026-01-31 15:17:08.238840209 +0000 UTC m=+1114.012169281" watchObservedRunningTime="2026-01-31 15:17:08.251141527 +0000 UTC m=+1114.024470579" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.231590 4735 generic.go:334] "Generic (PLEG): container finished" podID="52ec6049-21e4-46bc-9d63-b5c826ff0593" 
containerID="7d2dda77af76f5e733eef355993ad13ba3e499485a19de89ba0956204407ebd2" exitCode=0 Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.231661 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerDied","Data":"7d2dda77af76f5e733eef355993ad13ba3e499485a19de89ba0956204407ebd2"} Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.386468 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.541497 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.541642 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.541866 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxfx4\" (UniqueName: \"kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.541995 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.542087 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.542148 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.542201 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.542962 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.543162 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.548918 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4" (OuterVolumeSpecName: "kube-api-access-qxfx4") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "kube-api-access-qxfx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.549833 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts" (OuterVolumeSpecName: "scripts") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.591322 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.643841 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.646611 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") pod \"52ec6049-21e4-46bc-9d63-b5c826ff0593\" (UID: \"52ec6049-21e4-46bc-9d63-b5c826ff0593\") " Jan 31 15:17:09 crc kubenswrapper[4735]: W0131 15:17:09.646646 4735 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/52ec6049-21e4-46bc-9d63-b5c826ff0593/volumes/kubernetes.io~secret/combined-ca-bundle Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.646836 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.647792 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxfx4\" (UniqueName: \"kubernetes.io/projected/52ec6049-21e4-46bc-9d63-b5c826ff0593-kube-api-access-qxfx4\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.647936 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.647960 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.647977 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52ec6049-21e4-46bc-9d63-b5c826ff0593-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.647991 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.648004 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.670538 4735 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod15c8f74d-f504-4ddf-a823-12b35f0d65ba"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod15c8f74d-f504-4ddf-a823-12b35f0d65ba] : Timed out while waiting for systemd to remove kubepods-besteffort-pod15c8f74d_f504_4ddf_a823_12b35f0d65ba.slice" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.676021 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data" (OuterVolumeSpecName: "config-data") pod "52ec6049-21e4-46bc-9d63-b5c826ff0593" (UID: "52ec6049-21e4-46bc-9d63-b5c826ff0593"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:09 crc kubenswrapper[4735]: I0131 15:17:09.750588 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ec6049-21e4-46bc-9d63-b5c826ff0593-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.248626 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"52ec6049-21e4-46bc-9d63-b5c826ff0593","Type":"ContainerDied","Data":"d3ef1cc63fd8275a8b46eddfd0fc599eeafb9978ed958bebea5b35ca39a6994f"} Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.248719 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.249844 4735 scope.go:117] "RemoveContainer" containerID="1eea02330780fdf6badafd39726857c469d9127829185bc5d426f5e86f09342d" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.286367 4735 scope.go:117] "RemoveContainer" containerID="f94af53ee5041a615cd23f74ca9508f2feaa1db631fab33acb72460ff9640b9b" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.324402 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.360539 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.361694 4735 scope.go:117] "RemoveContainer" containerID="7d2dda77af76f5e733eef355993ad13ba3e499485a19de89ba0956204407ebd2" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.381619 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:10 crc kubenswrapper[4735]: E0131 15:17:10.382716 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-central-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.382778 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-central-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: E0131 15:17:10.382807 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-notification-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.382822 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-notification-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: E0131 15:17:10.382903 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="sg-core" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.382917 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="sg-core" Jan 31 15:17:10 crc kubenswrapper[4735]: E0131 15:17:10.382997 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="proxy-httpd" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.383010 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="proxy-httpd" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.383738 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="proxy-httpd" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.383782 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="sg-core" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.383836 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-notification-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.383867 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" containerName="ceilometer-central-agent" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.386910 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.389331 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.389572 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.395265 4735 scope.go:117] "RemoveContainer" containerID="9c7304d22af4bedff01e63d191f8cbc3e5be8ec4daefb71eddc7ab3393477959" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.398622 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.464989 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465069 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhknz\" (UniqueName: \"kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465109 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465157 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465214 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.465286 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.566964 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhknz\" (UniqueName: 
\"kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567066 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567210 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567572 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567695 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.567772 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.568788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.569590 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.574165 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.574528 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts\") 
pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.575687 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.576329 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.588382 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhknz\" (UniqueName: \"kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz\") pod \"ceilometer-0\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " pod="openstack/ceilometer-0" Jan 31 15:17:10 crc kubenswrapper[4735]: I0131 15:17:10.720877 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:11 crc kubenswrapper[4735]: I0131 15:17:11.215063 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:11 crc kubenswrapper[4735]: W0131 15:17:11.223921 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca15585_7b5d_4b8c_a2bb_4be714292716.slice/crio-0ad7f47ebdd1ebdb96a2a4bf7ce6376c5f9bc04bef54c19b11b8c8530b200455 WatchSource:0}: Error finding container 0ad7f47ebdd1ebdb96a2a4bf7ce6376c5f9bc04bef54c19b11b8c8530b200455: Status 404 returned error can't find the container with id 0ad7f47ebdd1ebdb96a2a4bf7ce6376c5f9bc04bef54c19b11b8c8530b200455 Jan 31 15:17:11 crc kubenswrapper[4735]: I0131 15:17:11.259835 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerStarted","Data":"0ad7f47ebdd1ebdb96a2a4bf7ce6376c5f9bc04bef54c19b11b8c8530b200455"} Jan 31 15:17:11 crc kubenswrapper[4735]: I0131 15:17:11.552525 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ec6049-21e4-46bc-9d63-b5c826ff0593" path="/var/lib/kubelet/pods/52ec6049-21e4-46bc-9d63-b5c826ff0593/volumes" Jan 31 15:17:12 crc kubenswrapper[4735]: I0131 15:17:12.271528 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerStarted","Data":"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4"} Jan 31 15:17:13 crc kubenswrapper[4735]: I0131 15:17:13.286930 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerStarted","Data":"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52"} Jan 31 15:17:13 crc kubenswrapper[4735]: I0131 15:17:13.287602 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerStarted","Data":"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8"} Jan 31 15:17:16 crc kubenswrapper[4735]: I0131 15:17:16.325289 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerStarted","Data":"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7"} Jan 31 15:17:16 crc kubenswrapper[4735]: I0131 15:17:16.327855 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:17:17 crc kubenswrapper[4735]: I0131 15:17:17.339359 4735 generic.go:334] "Generic (PLEG): container finished" podID="578e9195-5203-4b74-ae9b-90137faafc8b" containerID="7053a470168586a1d76f370c5ce6a1d733a91484a850eb2452efddf927a25358" exitCode=0 Jan 31 15:17:17 crc kubenswrapper[4735]: I0131 15:17:17.339475 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gddhd" event={"ID":"578e9195-5203-4b74-ae9b-90137faafc8b","Type":"ContainerDied","Data":"7053a470168586a1d76f370c5ce6a1d733a91484a850eb2452efddf927a25358"} Jan 31 15:17:17 crc kubenswrapper[4735]: I0131 15:17:17.367691 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.070503115 podStartE2EDuration="7.367674315s" podCreationTimestamp="2026-01-31 15:17:10 +0000 UTC" firstStartedPulling="2026-01-31 15:17:11.226121432 +0000 UTC m=+1116.999450474" lastFinishedPulling="2026-01-31 15:17:15.523292622 +0000 UTC m=+1121.296621674" observedRunningTime="2026-01-31 15:17:16.366563096 +0000 UTC m=+1122.139892158" watchObservedRunningTime="2026-01-31 15:17:17.367674315 +0000 UTC m=+1123.141003367" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.710479 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.845455 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh82t\" (UniqueName: \"kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t\") pod \"578e9195-5203-4b74-ae9b-90137faafc8b\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.845511 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts\") pod \"578e9195-5203-4b74-ae9b-90137faafc8b\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.845557 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle\") pod \"578e9195-5203-4b74-ae9b-90137faafc8b\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.845665 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data\") pod \"578e9195-5203-4b74-ae9b-90137faafc8b\" (UID: \"578e9195-5203-4b74-ae9b-90137faafc8b\") " Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.854127 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts" (OuterVolumeSpecName: "scripts") pod "578e9195-5203-4b74-ae9b-90137faafc8b" (UID: "578e9195-5203-4b74-ae9b-90137faafc8b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.857411 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t" (OuterVolumeSpecName: "kube-api-access-zh82t") pod "578e9195-5203-4b74-ae9b-90137faafc8b" (UID: "578e9195-5203-4b74-ae9b-90137faafc8b"). InnerVolumeSpecName "kube-api-access-zh82t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.882075 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "578e9195-5203-4b74-ae9b-90137faafc8b" (UID: "578e9195-5203-4b74-ae9b-90137faafc8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.903187 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data" (OuterVolumeSpecName: "config-data") pod "578e9195-5203-4b74-ae9b-90137faafc8b" (UID: "578e9195-5203-4b74-ae9b-90137faafc8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.947612 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh82t\" (UniqueName: \"kubernetes.io/projected/578e9195-5203-4b74-ae9b-90137faafc8b-kube-api-access-zh82t\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.947651 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.947666 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:18 crc kubenswrapper[4735]: I0131 15:17:18.947678 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578e9195-5203-4b74-ae9b-90137faafc8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.357088 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gddhd" event={"ID":"578e9195-5203-4b74-ae9b-90137faafc8b","Type":"ContainerDied","Data":"d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298"} Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.357123 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01a1984236b4765ff19556930b14afbe4f6d701318f034e54f4d3ff08dba298" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.357160 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gddhd" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.520922 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 15:17:19 crc kubenswrapper[4735]: E0131 15:17:19.521485 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578e9195-5203-4b74-ae9b-90137faafc8b" containerName="nova-cell0-conductor-db-sync" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.521509 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="578e9195-5203-4b74-ae9b-90137faafc8b" containerName="nova-cell0-conductor-db-sync" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.521734 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="578e9195-5203-4b74-ae9b-90137faafc8b" containerName="nova-cell0-conductor-db-sync" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.522427 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.527217 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.527355 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rz8sg" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.536805 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.559277 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.559557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvpc\" (UniqueName: \"kubernetes.io/projected/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-kube-api-access-rjvpc\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.559608 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.662372 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.663052 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvpc\" (UniqueName: \"kubernetes.io/projected/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-kube-api-access-rjvpc\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: 
I0131 15:17:19.663150 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.669248 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.678117 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.680340 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvpc\" (UniqueName: \"kubernetes.io/projected/2e5c1a1c-5aa4-428a-8729-77ce2cb81992-kube-api-access-rjvpc\") pod \"nova-cell0-conductor-0\" (UID: \"2e5c1a1c-5aa4-428a-8729-77ce2cb81992\") " pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:19 crc kubenswrapper[4735]: I0131 15:17:19.853130 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:20 crc kubenswrapper[4735]: I0131 15:17:20.349066 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 15:17:20 crc kubenswrapper[4735]: W0131 15:17:20.349306 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e5c1a1c_5aa4_428a_8729_77ce2cb81992.slice/crio-993eb1ed02ccee0c96d3b71e7697ab9ff48bbcb9d0c7919746adda43904538ca WatchSource:0}: Error finding container 993eb1ed02ccee0c96d3b71e7697ab9ff48bbcb9d0c7919746adda43904538ca: Status 404 returned error can't find the container with id 993eb1ed02ccee0c96d3b71e7697ab9ff48bbcb9d0c7919746adda43904538ca Jan 31 15:17:20 crc kubenswrapper[4735]: I0131 15:17:20.376145 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e5c1a1c-5aa4-428a-8729-77ce2cb81992","Type":"ContainerStarted","Data":"993eb1ed02ccee0c96d3b71e7697ab9ff48bbcb9d0c7919746adda43904538ca"} Jan 31 15:17:21 crc kubenswrapper[4735]: I0131 15:17:21.389868 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e5c1a1c-5aa4-428a-8729-77ce2cb81992","Type":"ContainerStarted","Data":"00a85b63edcd0bccbd7d7fa8d89eb630f5326f2bffbc258330502c7f9d6b9d72"} Jan 31 15:17:21 crc kubenswrapper[4735]: I0131 15:17:21.390747 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:21 crc kubenswrapper[4735]: I0131 15:17:21.430134 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.430112668 podStartE2EDuration="2.430112668s" podCreationTimestamp="2026-01-31 15:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
15:17:21.414203017 +0000 UTC m=+1127.187532089" watchObservedRunningTime="2026-01-31 15:17:21.430112668 +0000 UTC m=+1127.203441720" Jan 31 15:17:29 crc kubenswrapper[4735]: I0131 15:17:29.897235 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.350083 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6p66x"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.352039 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.357003 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.360270 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6p66x"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.360946 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.437015 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.437659 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.437900 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhhs8\" (UniqueName: \"kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.438090 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.535475 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.536893 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.542637 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.543812 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhhs8\" (UniqueName: \"kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.543857 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.543997 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.544047 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.551978 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.555200 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.558504 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.567624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.571044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhhs8\" (UniqueName: \"kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8\") pod \"nova-cell0-cell-mapping-6p66x\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.645576 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.645758 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7r8\" (UniqueName: \"kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.645782 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.645869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.649684 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.651310 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.653291 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.670084 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.684564 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.691997 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.715682 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.715780 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.721965 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.747840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748127 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748213 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn7r8\" (UniqueName: \"kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748235 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748287 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cw6q\" (UniqueName: \"kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748331 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.748522 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.761785 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.763974 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.773138 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn7r8\" (UniqueName: \"kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8\") pod \"nova-api-0\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.816624 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.818173 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.821644 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.850081 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.851900 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmg44\" (UniqueName: \"kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.851938 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.851995 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cw6q\" (UniqueName: \"kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.852026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.861673 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.861826 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " 
pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.870178 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.883202 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.890721 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cw6q\" (UniqueName: \"kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q\") pod \"nova-scheduler-0\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.953536 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.954994 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.958663 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.966726 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.966876 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.966922 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmg44\" (UniqueName: \"kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.966942 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.967000 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsz6z\" (UniqueName: \"kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 
15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.967032 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.967075 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.970225 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.970394 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.972116 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.990525 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:30 crc kubenswrapper[4735]: I0131 15:17:30.991078 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmg44\" (UniqueName: \"kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.069815 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.069883 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.069920 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.069944 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.069984 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.070005 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.070078 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsz6z\" (UniqueName: \"kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.070189 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.070315 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.070404 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djn2p\" (UniqueName: \"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.071018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.078654 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.079951 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " 
pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.098365 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsz6z\" (UniqueName: \"kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z\") pod \"nova-metadata-0\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.162040 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171506 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171559 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171668 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djn2p\" (UniqueName: \"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171731 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171794 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.171843 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.173309 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.174069 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc\") pod 
\"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.175824 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.176880 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.177306 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.187169 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.195278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djn2p\" (UniqueName: \"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p\") pod \"dnsmasq-dns-757b4f8459-xcg8z\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.295719 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.349267 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6p66x"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.480542 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fxdfs"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.482482 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.484653 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.485976 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.491824 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fxdfs"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.498064 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6p66x" event={"ID":"38bef073-0ba5-43e1-8532-cb868269bfc1","Type":"ContainerStarted","Data":"0128a4c2e59b6b2a75e7eb744a7cc2cd7fd4a2833acf1f4d4242cd3ad7c32e9f"} Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.504466 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.554588 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.583388 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442vg\" (UniqueName: \"kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.583595 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.583717 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.583808 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.685874 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-442vg\" (UniqueName: \"kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.685935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts\") pod 
\"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.685997 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.686037 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.692042 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.692233 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.693602 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.695524 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.729922 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-442vg\" (UniqueName: \"kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg\") pod \"nova-cell1-conductor-db-sync-fxdfs\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.820879 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:31 crc kubenswrapper[4735]: W0131 15:17:31.823455 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb419c4_43f6_4722_b209_87fc3ef3ea60.slice/crio-cab5bed17fca9f5194b3f04d0bc9d3b390f2f9262875bea1a1934fc93e8f6961 WatchSource:0}: Error finding container cab5bed17fca9f5194b3f04d0bc9d3b390f2f9262875bea1a1934fc93e8f6961: Status 404 returned error can't find the container with id cab5bed17fca9f5194b3f04d0bc9d3b390f2f9262875bea1a1934fc93e8f6961 Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.823911 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:31 crc kubenswrapper[4735]: I0131 15:17:31.935673 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.294717 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fxdfs"] Jan 31 15:17:32 crc kubenswrapper[4735]: W0131 15:17:32.299586 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b20a23a_80f2_4a93_81e2_062fec775d79.slice/crio-28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37 WatchSource:0}: Error finding container 28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37: Status 404 returned error can't find the container with id 28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37 Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.512267 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerStarted","Data":"67cad16ffe1a1f3f61dcd5bf651054c70e460e2a199613197c91be706f801a74"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.519908 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b397911-cf10-488e-af94-e0115c62b95b" containerID="294c687ce61d4086b42c2f6a8f20662cdd9ce785b26621e7302a517daa404e5d" exitCode=0 Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.520002 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" event={"ID":"2b397911-cf10-488e-af94-e0115c62b95b","Type":"ContainerDied","Data":"294c687ce61d4086b42c2f6a8f20662cdd9ce785b26621e7302a517daa404e5d"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.520030 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" event={"ID":"2b397911-cf10-488e-af94-e0115c62b95b","Type":"ContainerStarted","Data":"23da57fb5343c8a5ab3a8f9b5051e08a0a5c3df56889bfbd23a7f14f3821f990"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.533727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91529a48-98bc-4df9-b890-c6ad4eb96e2b","Type":"ContainerStarted","Data":"149ddf3f4353a99acbac73b6938ea59341e59ee316d91778467708db520e6c22"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.546378 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerStarted","Data":"cab5bed17fca9f5194b3f04d0bc9d3b390f2f9262875bea1a1934fc93e8f6961"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.579686 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1f9b8c3-b871-474d-92c4-9421274fdbbc","Type":"ContainerStarted","Data":"1cfbd95c2c6418046f28af9f871e6752df67070c06ee4f3781e36a2bc82aa938"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.587707 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" event={"ID":"8b20a23a-80f2-4a93-81e2-062fec775d79","Type":"ContainerStarted","Data":"28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.603765 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6p66x" 
event={"ID":"38bef073-0ba5-43e1-8532-cb868269bfc1","Type":"ContainerStarted","Data":"4547cb9cc23b5d8fba5621625bfb2f92b02a720a1bc302289e737536e9deb00e"} Jan 31 15:17:32 crc kubenswrapper[4735]: I0131 15:17:32.640159 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6p66x" podStartSLOduration=2.640133504 podStartE2EDuration="2.640133504s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:32.631025476 +0000 UTC m=+1138.404354538" watchObservedRunningTime="2026-01-31 15:17:32.640133504 +0000 UTC m=+1138.413462546" Jan 31 15:17:33 crc kubenswrapper[4735]: I0131 15:17:33.626580 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" event={"ID":"8b20a23a-80f2-4a93-81e2-062fec775d79","Type":"ContainerStarted","Data":"60d98f5271f1b92d7e1f95fa6aa1768ec8189fd62624e2a701a6dd18cb372368"} Jan 31 15:17:33 crc kubenswrapper[4735]: I0131 15:17:33.648664 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" podStartSLOduration=2.648647124 podStartE2EDuration="2.648647124s" podCreationTimestamp="2026-01-31 15:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:33.642119869 +0000 UTC m=+1139.415448931" watchObservedRunningTime="2026-01-31 15:17:33.648647124 +0000 UTC m=+1139.421976166" Jan 31 15:17:34 crc kubenswrapper[4735]: I0131 15:17:34.537578 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:34 crc kubenswrapper[4735]: I0131 15:17:34.585461 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.676119 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerStarted","Data":"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.676511 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerStarted","Data":"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.676688 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-log" containerID="cri-o://7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" gracePeriod=30 Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.677528 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-metadata" containerID="cri-o://a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" gracePeriod=30 Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.679154 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1f9b8c3-b871-474d-92c4-9421274fdbbc","Type":"ContainerStarted","Data":"c5bfc53fb7b8f0ee0d443dfea905b1af52ad046d79db9fa4f9144e106f6cc258"} 
Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.679283 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c5bfc53fb7b8f0ee0d443dfea905b1af52ad046d79db9fa4f9144e106f6cc258" gracePeriod=30 Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.685336 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerStarted","Data":"1fce3456a9d680dd9a35fcc2c2c0d4c58148a4a8fc1249820d4580d12bb0d074"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.685381 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerStarted","Data":"31ac21e0cbc815d22d592926db58da88c01c6ef087560513302ae2f50339b350"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.689086 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" event={"ID":"2b397911-cf10-488e-af94-e0115c62b95b","Type":"ContainerStarted","Data":"2b1d6086fc523bcd996138c556cfbd768e0bd50d2055029006498d8fe6b511e9"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.689677 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.693194 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91529a48-98bc-4df9-b890-c6ad4eb96e2b","Type":"ContainerStarted","Data":"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f"} Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.702074 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.993558046 podStartE2EDuration="5.702058315s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="2026-01-31 15:17:31.825517513 +0000 UTC m=+1137.598846555" lastFinishedPulling="2026-01-31 15:17:34.534017782 +0000 UTC m=+1140.307346824" observedRunningTime="2026-01-31 15:17:35.698993928 +0000 UTC m=+1141.472322970" watchObservedRunningTime="2026-01-31 15:17:35.702058315 +0000 UTC m=+1141.475387357" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.723678 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.763298766 podStartE2EDuration="5.723659658s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="2026-01-31 15:17:31.560662972 +0000 UTC m=+1137.333992014" lastFinishedPulling="2026-01-31 15:17:34.521023864 +0000 UTC m=+1140.294352906" observedRunningTime="2026-01-31 15:17:35.715095685 +0000 UTC m=+1141.488424767" watchObservedRunningTime="2026-01-31 15:17:35.723659658 +0000 UTC m=+1141.496988700" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.740049 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.934591224 podStartE2EDuration="5.740026692s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="2026-01-31 15:17:31.7178552 +0000 UTC m=+1137.491184242" lastFinishedPulling="2026-01-31 15:17:34.523290668 +0000 UTC m=+1140.296619710" observedRunningTime="2026-01-31 15:17:35.729703719 +0000 UTC m=+1141.503032771" 
watchObservedRunningTime="2026-01-31 15:17:35.740026692 +0000 UTC m=+1141.513355734" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.788967 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" podStartSLOduration=5.788947559 podStartE2EDuration="5.788947559s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:35.774195261 +0000 UTC m=+1141.547524313" watchObservedRunningTime="2026-01-31 15:17:35.788947559 +0000 UTC m=+1141.562276601" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.793109 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.804856195 podStartE2EDuration="5.793094087s" podCreationTimestamp="2026-01-31 15:17:30 +0000 UTC" firstStartedPulling="2026-01-31 15:17:31.511322493 +0000 UTC m=+1137.284651535" lastFinishedPulling="2026-01-31 15:17:34.499560385 +0000 UTC m=+1140.272889427" observedRunningTime="2026-01-31 15:17:35.792132729 +0000 UTC m=+1141.565461761" watchObservedRunningTime="2026-01-31 15:17:35.793094087 +0000 UTC m=+1141.566423129" Jan 31 15:17:35 crc kubenswrapper[4735]: I0131 15:17:35.971321 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.163453 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.187584 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.187626 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.692709 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705466 4735 generic.go:334] "Generic (PLEG): container finished" podID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerID="a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" exitCode=0 Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705493 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerDied","Data":"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5"} Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705505 4735 generic.go:334] "Generic (PLEG): container finished" podID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerID="7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" exitCode=143 Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705478 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705531 4735 scope.go:117] "RemoveContainer" containerID="a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705522 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerDied","Data":"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b"} Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.705694 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ccb419c4-43f6-4722-b209-87fc3ef3ea60","Type":"ContainerDied","Data":"cab5bed17fca9f5194b3f04d0bc9d3b390f2f9262875bea1a1934fc93e8f6961"} Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.735223 4735 scope.go:117] "RemoveContainer" containerID="7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.769689 4735 scope.go:117] "RemoveContainer" containerID="a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" Jan 31 15:17:36 crc kubenswrapper[4735]: E0131 15:17:36.770205 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5\": container with ID starting with a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5 not found: ID does not exist" containerID="a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770240 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5"} err="failed to get container status \"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5\": rpc error: code = NotFound desc = could not find container \"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5\": container with ID starting with a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5 not found: ID does not exist" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770267 4735 scope.go:117] "RemoveContainer" containerID="7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" Jan 31 15:17:36 crc kubenswrapper[4735]: E0131 15:17:36.770665 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b\": container with ID starting with 7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b not found: ID does not exist" containerID="7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770690 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b"} err="failed to get container status \"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b\": rpc error: code = NotFound desc = could not find container \"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b\": container with ID starting with 7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b not found: ID does not exist" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770708 4735 
scope.go:117] "RemoveContainer" containerID="a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770955 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5"} err="failed to get container status \"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5\": rpc error: code = NotFound desc = could not find container \"a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5\": container with ID starting with a3a985d1afa59d80dec98621362f1b3f0ad6bbdb38df2a44c49457a356e2c2a5 not found: ID does not exist" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.770981 4735 scope.go:117] "RemoveContainer" containerID="7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.771227 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b"} err="failed to get container status \"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b\": rpc error: code = NotFound desc = could not find container \"7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b\": container with ID starting with 7e8cdb12dd2aeb53b6e7fd7823404ebe58dab29a18d571b47375913c202bdc9b not found: ID does not exist" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.817279 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle\") pod \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.817458 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data\") pod \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.817538 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsz6z\" (UniqueName: \"kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z\") pod \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.817562 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs\") pod \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\" (UID: \"ccb419c4-43f6-4722-b209-87fc3ef3ea60\") " Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.817883 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs" (OuterVolumeSpecName: "logs") pod "ccb419c4-43f6-4722-b209-87fc3ef3ea60" (UID: "ccb419c4-43f6-4722-b209-87fc3ef3ea60"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.818903 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb419c4-43f6-4722-b209-87fc3ef3ea60-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.822639 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z" (OuterVolumeSpecName: "kube-api-access-hsz6z") pod "ccb419c4-43f6-4722-b209-87fc3ef3ea60" (UID: "ccb419c4-43f6-4722-b209-87fc3ef3ea60"). InnerVolumeSpecName "kube-api-access-hsz6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.845516 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccb419c4-43f6-4722-b209-87fc3ef3ea60" (UID: "ccb419c4-43f6-4722-b209-87fc3ef3ea60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.870109 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data" (OuterVolumeSpecName: "config-data") pod "ccb419c4-43f6-4722-b209-87fc3ef3ea60" (UID: "ccb419c4-43f6-4722-b209-87fc3ef3ea60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.920774 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.920807 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsz6z\" (UniqueName: \"kubernetes.io/projected/ccb419c4-43f6-4722-b209-87fc3ef3ea60-kube-api-access-hsz6z\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:36 crc kubenswrapper[4735]: I0131 15:17:36.920818 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb419c4-43f6-4722-b209-87fc3ef3ea60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.040971 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.055012 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.064890 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:37 crc kubenswrapper[4735]: E0131 15:17:37.065804 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-metadata" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.065908 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-metadata" Jan 31 15:17:37 crc kubenswrapper[4735]: E0131 15:17:37.065992 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-log" Jan 31 15:17:37 crc 
kubenswrapper[4735]: I0131 15:17:37.066063 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-log" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.066463 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-log" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.066558 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" containerName="nova-metadata-metadata" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.067830 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.070398 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.070676 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.088103 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.225981 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.226031 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t5x\" (UniqueName: \"kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.226074 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.226109 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.226176 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.327407 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc 
kubenswrapper[4735]: I0131 15:17:37.327613 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.327677 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.327711 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t5x\" (UniqueName: \"kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.327762 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.331066 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.331489 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.331889 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.340318 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.346360 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.346404 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.347393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t5x\" (UniqueName: \"kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x\") pod \"nova-metadata-0\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.407433 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.551266 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb419c4-43f6-4722-b209-87fc3ef3ea60" path="/var/lib/kubelet/pods/ccb419c4-43f6-4722-b209-87fc3ef3ea60/volumes" Jan 31 15:17:37 crc kubenswrapper[4735]: I0131 15:17:37.896314 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:37 crc kubenswrapper[4735]: W0131 15:17:37.900974 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod318bde6e_c35c_4cb4_947d_4772b0d8f148.slice/crio-020e8e398879331a1a963e05ac35c19b32639e7adf97ba114e2009747f97f614 WatchSource:0}: Error finding container 020e8e398879331a1a963e05ac35c19b32639e7adf97ba114e2009747f97f614: Status 404 returned error can't find the container with id 020e8e398879331a1a963e05ac35c19b32639e7adf97ba114e2009747f97f614 Jan 31 15:17:38 crc kubenswrapper[4735]: I0131 15:17:38.730479 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerStarted","Data":"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a"} Jan 31 15:17:38 crc kubenswrapper[4735]: I0131 15:17:38.731121 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerStarted","Data":"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3"} Jan 31 15:17:38 crc kubenswrapper[4735]: I0131 15:17:38.731134 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerStarted","Data":"020e8e398879331a1a963e05ac35c19b32639e7adf97ba114e2009747f97f614"} Jan 31 15:17:38 crc kubenswrapper[4735]: I0131 15:17:38.761693 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.76167173 podStartE2EDuration="1.76167173s" podCreationTimestamp="2026-01-31 15:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:38.752384857 +0000 UTC m=+1144.525713919" watchObservedRunningTime="2026-01-31 15:17:38.76167173 +0000 UTC m=+1144.535000782" Jan 31 15:17:39 crc kubenswrapper[4735]: I0131 15:17:39.744818 4735 generic.go:334] "Generic (PLEG): container finished" podID="8b20a23a-80f2-4a93-81e2-062fec775d79" containerID="60d98f5271f1b92d7e1f95fa6aa1768ec8189fd62624e2a701a6dd18cb372368" exitCode=0 Jan 31 15:17:39 crc kubenswrapper[4735]: I0131 15:17:39.744884 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" 
event={"ID":"8b20a23a-80f2-4a93-81e2-062fec775d79","Type":"ContainerDied","Data":"60d98f5271f1b92d7e1f95fa6aa1768ec8189fd62624e2a701a6dd18cb372368"} Jan 31 15:17:39 crc kubenswrapper[4735]: I0131 15:17:39.751369 4735 generic.go:334] "Generic (PLEG): container finished" podID="38bef073-0ba5-43e1-8532-cb868269bfc1" containerID="4547cb9cc23b5d8fba5621625bfb2f92b02a720a1bc302289e737536e9deb00e" exitCode=0 Jan 31 15:17:39 crc kubenswrapper[4735]: I0131 15:17:39.751513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6p66x" event={"ID":"38bef073-0ba5-43e1-8532-cb868269bfc1","Type":"ContainerDied","Data":"4547cb9cc23b5d8fba5621625bfb2f92b02a720a1bc302289e737536e9deb00e"} Jan 31 15:17:40 crc kubenswrapper[4735]: I0131 15:17:40.728626 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 15:17:40 crc kubenswrapper[4735]: I0131 15:17:40.956555 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:17:40 crc kubenswrapper[4735]: I0131 15:17:40.956871 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:17:40 crc kubenswrapper[4735]: I0131 15:17:40.970934 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.042098 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.293867 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.298796 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.299403 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.350869 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts\") pod \"8b20a23a-80f2-4a93-81e2-062fec775d79\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.350912 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data\") pod \"8b20a23a-80f2-4a93-81e2-062fec775d79\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.350934 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-442vg\" (UniqueName: \"kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg\") pod \"8b20a23a-80f2-4a93-81e2-062fec775d79\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.351011 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts\") pod \"38bef073-0ba5-43e1-8532-cb868269bfc1\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.351037 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhhs8\" (UniqueName: \"kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8\") pod \"38bef073-0ba5-43e1-8532-cb868269bfc1\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.351155 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle\") pod \"38bef073-0ba5-43e1-8532-cb868269bfc1\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.351231 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle\") pod \"8b20a23a-80f2-4a93-81e2-062fec775d79\" (UID: \"8b20a23a-80f2-4a93-81e2-062fec775d79\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.351255 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data\") pod \"38bef073-0ba5-43e1-8532-cb868269bfc1\" (UID: \"38bef073-0ba5-43e1-8532-cb868269bfc1\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.367660 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts" (OuterVolumeSpecName: "scripts") pod "8b20a23a-80f2-4a93-81e2-062fec775d79" (UID: "8b20a23a-80f2-4a93-81e2-062fec775d79"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.368560 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts" (OuterVolumeSpecName: "scripts") pod "38bef073-0ba5-43e1-8532-cb868269bfc1" (UID: "38bef073-0ba5-43e1-8532-cb868269bfc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.368569 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg" (OuterVolumeSpecName: "kube-api-access-442vg") pod "8b20a23a-80f2-4a93-81e2-062fec775d79" (UID: "8b20a23a-80f2-4a93-81e2-062fec775d79"). InnerVolumeSpecName "kube-api-access-442vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.371759 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8" (OuterVolumeSpecName: "kube-api-access-dhhs8") pod "38bef073-0ba5-43e1-8532-cb868269bfc1" (UID: "38bef073-0ba5-43e1-8532-cb868269bfc1"). InnerVolumeSpecName "kube-api-access-dhhs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.412518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data" (OuterVolumeSpecName: "config-data") pod "38bef073-0ba5-43e1-8532-cb868269bfc1" (UID: "38bef073-0ba5-43e1-8532-cb868269bfc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.425034 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.425585 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="dnsmasq-dns" containerID="cri-o://126f609286877bdc472e0350ee5ae624b8d437690dff7d0f597c352d84fbf723" gracePeriod=10 Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.436515 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data" (OuterVolumeSpecName: "config-data") pod "8b20a23a-80f2-4a93-81e2-062fec775d79" (UID: "8b20a23a-80f2-4a93-81e2-062fec775d79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454113 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454349 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454485 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454608 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-442vg\" (UniqueName: \"kubernetes.io/projected/8b20a23a-80f2-4a93-81e2-062fec775d79-kube-api-access-442vg\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454695 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.454862 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhhs8\" (UniqueName: \"kubernetes.io/projected/38bef073-0ba5-43e1-8532-cb868269bfc1-kube-api-access-dhhs8\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.455643 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38bef073-0ba5-43e1-8532-cb868269bfc1" (UID: "38bef073-0ba5-43e1-8532-cb868269bfc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.479883 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b20a23a-80f2-4a93-81e2-062fec775d79" (UID: "8b20a23a-80f2-4a93-81e2-062fec775d79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.558446 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38bef073-0ba5-43e1-8532-cb868269bfc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.558479 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b20a23a-80f2-4a93-81e2-062fec775d79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.781756 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" event={"ID":"8b20a23a-80f2-4a93-81e2-062fec775d79","Type":"ContainerDied","Data":"28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37"} Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.781808 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b0e5f541e5e57b6f5268601e65a5937b01c4857f2326c5a4ec2b5ab7120a37" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.781889 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fxdfs" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.786090 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6p66x" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.786107 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6p66x" event={"ID":"38bef073-0ba5-43e1-8532-cb868269bfc1","Type":"ContainerDied","Data":"0128a4c2e59b6b2a75e7eb744a7cc2cd7fd4a2833acf1f4d4242cd3ad7c32e9f"} Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.786587 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0128a4c2e59b6b2a75e7eb744a7cc2cd7fd4a2833acf1f4d4242cd3ad7c32e9f" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.788697 4735 generic.go:334] "Generic (PLEG): container finished" podID="f56db27d-6892-4506-b605-6198658b7f6d" containerID="126f609286877bdc472e0350ee5ae624b8d437690dff7d0f597c352d84fbf723" exitCode=0 Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.788891 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" event={"ID":"f56db27d-6892-4506-b605-6198658b7f6d","Type":"ContainerDied","Data":"126f609286877bdc472e0350ee5ae624b8d437690dff7d0f597c352d84fbf723"} Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.858601 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.876072 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 15:17:41 crc kubenswrapper[4735]: E0131 15:17:41.876465 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b20a23a-80f2-4a93-81e2-062fec775d79" containerName="nova-cell1-conductor-db-sync" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.876476 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b20a23a-80f2-4a93-81e2-062fec775d79" containerName="nova-cell1-conductor-db-sync" Jan 31 15:17:41 crc kubenswrapper[4735]: E0131 15:17:41.876502 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38bef073-0ba5-43e1-8532-cb868269bfc1" 
containerName="nova-manage" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.876508 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bef073-0ba5-43e1-8532-cb868269bfc1" containerName="nova-manage" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.876661 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="38bef073-0ba5-43e1-8532-cb868269bfc1" containerName="nova-manage" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.876678 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b20a23a-80f2-4a93-81e2-062fec775d79" containerName="nova-cell1-conductor-db-sync" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.877970 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.880018 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.890278 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.892440 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967793 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b58w\" (UniqueName: \"kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967881 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967903 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967924 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967957 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.967973 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb\") pod \"f56db27d-6892-4506-b605-6198658b7f6d\" (UID: \"f56db27d-6892-4506-b605-6198658b7f6d\") " Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.968113 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlhj\" (UniqueName: \"kubernetes.io/projected/1a55c776-7b88-4c64-a106-b2f8619425a7-kube-api-access-dwlhj\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.968140 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.968191 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:41 crc kubenswrapper[4735]: I0131 15:17:41.972559 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w" (OuterVolumeSpecName: "kube-api-access-6b58w") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "kube-api-access-6b58w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.017848 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.019070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config" (OuterVolumeSpecName: "config") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.026980 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.039478 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.043786 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.043893 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.047108 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f56db27d-6892-4506-b605-6198658b7f6d" (UID: "f56db27d-6892-4506-b605-6198658b7f6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.059694 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.059920 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-log" containerID="cri-o://31ac21e0cbc815d22d592926db58da88c01c6ef087560513302ae2f50339b350" gracePeriod=30 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.060056 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-api" containerID="cri-o://1fce3456a9d680dd9a35fcc2c2c0d4c58148a4a8fc1249820d4580d12bb0d074" gracePeriod=30 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwlhj\" (UniqueName: \"kubernetes.io/projected/1a55c776-7b88-4c64-a106-b2f8619425a7-kube-api-access-dwlhj\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069597 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069654 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069925 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069937 4735 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069945 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069957 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069966 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f56db27d-6892-4506-b605-6198658b7f6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.069977 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b58w\" (UniqueName: \"kubernetes.io/projected/f56db27d-6892-4506-b605-6198658b7f6d-kube-api-access-6b58w\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.074339 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.075127 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55c776-7b88-4c64-a106-b2f8619425a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.087883 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwlhj\" (UniqueName: \"kubernetes.io/projected/1a55c776-7b88-4c64-a106-b2f8619425a7-kube-api-access-dwlhj\") pod \"nova-cell1-conductor-0\" (UID: \"1a55c776-7b88-4c64-a106-b2f8619425a7\") " pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.113471 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.113747 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-log" containerID="cri-o://c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" gracePeriod=30 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.113833 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-metadata" containerID="cri-o://cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" gracePeriod=30 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.203061 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.295075 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.409376 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.409698 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.677755 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 15:17:42 crc kubenswrapper[4735]: W0131 15:17:42.682914 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a55c776_7b88_4c64_a106_b2f8619425a7.slice/crio-3c23f061656888d52999327c424341a6734a76706b0fb1a5e495933632d23a14 WatchSource:0}: Error finding container 3c23f061656888d52999327c424341a6734a76706b0fb1a5e495933632d23a14: Status 404 returned error can't find the container with id 3c23f061656888d52999327c424341a6734a76706b0fb1a5e495933632d23a14 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.727857 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.786701 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data\") pod \"318bde6e-c35c-4cb4-947d-4772b0d8f148\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.786762 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle\") pod \"318bde6e-c35c-4cb4-947d-4772b0d8f148\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.786803 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs\") pod \"318bde6e-c35c-4cb4-947d-4772b0d8f148\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.786926 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6t5x\" (UniqueName: \"kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x\") pod \"318bde6e-c35c-4cb4-947d-4772b0d8f148\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.786996 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs\") pod \"318bde6e-c35c-4cb4-947d-4772b0d8f148\" (UID: \"318bde6e-c35c-4cb4-947d-4772b0d8f148\") " Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.788001 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs" (OuterVolumeSpecName: "logs") pod "318bde6e-c35c-4cb4-947d-4772b0d8f148" (UID: "318bde6e-c35c-4cb4-947d-4772b0d8f148"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.799158 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x" (OuterVolumeSpecName: "kube-api-access-d6t5x") pod "318bde6e-c35c-4cb4-947d-4772b0d8f148" (UID: "318bde6e-c35c-4cb4-947d-4772b0d8f148"). InnerVolumeSpecName "kube-api-access-d6t5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823079 4735 generic.go:334] "Generic (PLEG): container finished" podID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerID="cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" exitCode=0 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823120 4735 generic.go:334] "Generic (PLEG): container finished" podID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerID="c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" exitCode=143 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823188 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerDied","Data":"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823201 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823224 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerDied","Data":"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823239 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"318bde6e-c35c-4cb4-947d-4772b0d8f148","Type":"ContainerDied","Data":"020e8e398879331a1a963e05ac35c19b32639e7adf97ba114e2009747f97f614"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.823258 4735 scope.go:117] "RemoveContainer" containerID="cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.826852 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" event={"ID":"f56db27d-6892-4506-b605-6198658b7f6d","Type":"ContainerDied","Data":"35f2142428a6f80c2bc5e871baff12076b9ba8422fb7bf09e6d7f10f82b4c35e"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.826950 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-ngkj8" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.831155 4735 generic.go:334] "Generic (PLEG): container finished" podID="00cc8be9-5204-43ea-a723-a25e411adecc" containerID="31ac21e0cbc815d22d592926db58da88c01c6ef087560513302ae2f50339b350" exitCode=143 Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.831237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerDied","Data":"31ac21e0cbc815d22d592926db58da88c01c6ef087560513302ae2f50339b350"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.833198 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a55c776-7b88-4c64-a106-b2f8619425a7","Type":"ContainerStarted","Data":"3c23f061656888d52999327c424341a6734a76706b0fb1a5e495933632d23a14"} Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.848449 4735 scope.go:117] "RemoveContainer" containerID="c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.865982 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.873620 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data" (OuterVolumeSpecName: "config-data") pod "318bde6e-c35c-4cb4-947d-4772b0d8f148" (UID: "318bde6e-c35c-4cb4-947d-4772b0d8f148"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.876084 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "318bde6e-c35c-4cb4-947d-4772b0d8f148" (UID: "318bde6e-c35c-4cb4-947d-4772b0d8f148"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.876376 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-ngkj8"] Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.888509 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/318bde6e-c35c-4cb4-947d-4772b0d8f148-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.888534 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.888545 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.888553 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t5x\" (UniqueName: \"kubernetes.io/projected/318bde6e-c35c-4cb4-947d-4772b0d8f148-kube-api-access-d6t5x\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.890272 4735 scope.go:117] "RemoveContainer" containerID="cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" Jan 31 15:17:42 crc kubenswrapper[4735]: E0131 15:17:42.890740 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a\": container with ID starting with cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a not found: ID does not exist" containerID="cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.890765 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a"} err="failed to get container status \"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a\": rpc error: code = NotFound desc = could not find container \"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a\": container with ID starting with cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a not found: ID does not exist" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.890784 4735 scope.go:117] "RemoveContainer" containerID="c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" Jan 31 15:17:42 crc kubenswrapper[4735]: E0131 15:17:42.891021 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3\": container with ID starting with c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3 not found: ID does not exist" containerID="c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891044 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3"} err="failed to get container status \"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3\": rpc error: code = NotFound desc = could not find 
container \"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3\": container with ID starting with c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3 not found: ID does not exist" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891059 4735 scope.go:117] "RemoveContainer" containerID="cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891224 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a"} err="failed to get container status \"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a\": rpc error: code = NotFound desc = could not find container \"cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a\": container with ID starting with cfcd8b9c2224629835708c4f051a7ab3ed1f77c79d30fce4014718c13acf2c7a not found: ID does not exist" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891242 4735 scope.go:117] "RemoveContainer" containerID="c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891387 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3"} err="failed to get container status \"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3\": rpc error: code = NotFound desc = could not find container \"c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3\": container with ID starting with c876e2e35bc8fbb85a104c7f02aa025af5ccc08963317597532c3d90c00182a3 not found: ID does not exist" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.891403 4735 scope.go:117] "RemoveContainer" containerID="126f609286877bdc472e0350ee5ae624b8d437690dff7d0f597c352d84fbf723" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.893472 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "318bde6e-c35c-4cb4-947d-4772b0d8f148" (UID: "318bde6e-c35c-4cb4-947d-4772b0d8f148"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.908786 4735 scope.go:117] "RemoveContainer" containerID="5894c58035f656275e52344fc96699db27c30eab24ac59ee56e62612df8532c4" Jan 31 15:17:42 crc kubenswrapper[4735]: I0131 15:17:42.990350 4735 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/318bde6e-c35c-4cb4-947d-4772b0d8f148-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.167251 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.190298 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202157 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:43 crc kubenswrapper[4735]: E0131 15:17:43.202669 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="dnsmasq-dns" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202692 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="dnsmasq-dns" Jan 31 15:17:43 crc kubenswrapper[4735]: E0131 15:17:43.202717 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="init" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202724 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="init" Jan 31 15:17:43 crc kubenswrapper[4735]: E0131 15:17:43.202735 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-metadata" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202741 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-metadata" Jan 31 15:17:43 crc kubenswrapper[4735]: E0131 15:17:43.202758 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-log" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202763 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-log" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202941 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-metadata" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202960 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56db27d-6892-4506-b605-6198658b7f6d" containerName="dnsmasq-dns" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.202972 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" containerName="nova-metadata-log" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.204300 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.206742 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.207452 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.225320 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.295951 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.296017 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxdpg\" (UniqueName: \"kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.296079 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.296151 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.296199 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.398606 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.399735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxdpg\" (UniqueName: \"kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.399836 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " 
pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.399936 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.400001 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.400712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.405953 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.406620 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.406980 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.427285 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxdpg\" (UniqueName: \"kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg\") pod \"nova-metadata-0\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.548913 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318bde6e-c35c-4cb4-947d-4772b0d8f148" path="/var/lib/kubelet/pods/318bde6e-c35c-4cb4-947d-4772b0d8f148/volumes" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.549504 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56db27d-6892-4506-b605-6198658b7f6d" path="/var/lib/kubelet/pods/f56db27d-6892-4506-b605-6198658b7f6d/volumes" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.570960 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.847526 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1a55c776-7b88-4c64-a106-b2f8619425a7","Type":"ContainerStarted","Data":"15b52a05b2760314a3900af6a770d4460ad2a6ceb066fa5a649cf76f03720f47"} Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.848494 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.852759 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerName="nova-scheduler-scheduler" containerID="cri-o://44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" gracePeriod=30 Jan 31 15:17:43 crc kubenswrapper[4735]: I0131 15:17:43.874265 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.874251194 podStartE2EDuration="2.874251194s" podCreationTimestamp="2026-01-31 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:43.867270416 +0000 UTC m=+1149.640599458" watchObservedRunningTime="2026-01-31 15:17:43.874251194 +0000 UTC m=+1149.647580236" Jan 31 15:17:44 crc kubenswrapper[4735]: W0131 15:17:44.153524 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1272c508_e793_42ce_b985_74c083b4b70c.slice/crio-ce6cd982a49e9bcfcf098eae1ebec00ef07d49d58970dd5450b5a62a46d2dfc9 WatchSource:0}: Error finding container ce6cd982a49e9bcfcf098eae1ebec00ef07d49d58970dd5450b5a62a46d2dfc9: Status 404 returned error can't find the container with id ce6cd982a49e9bcfcf098eae1ebec00ef07d49d58970dd5450b5a62a46d2dfc9 Jan 31 15:17:44 crc kubenswrapper[4735]: I0131 15:17:44.177919 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:17:44 crc kubenswrapper[4735]: I0131 15:17:44.868641 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerStarted","Data":"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265"} Jan 31 15:17:44 crc kubenswrapper[4735]: I0131 15:17:44.869207 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerStarted","Data":"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902"} Jan 31 15:17:44 crc kubenswrapper[4735]: I0131 15:17:44.869236 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerStarted","Data":"ce6cd982a49e9bcfcf098eae1ebec00ef07d49d58970dd5450b5a62a46d2dfc9"} Jan 31 15:17:44 crc kubenswrapper[4735]: I0131 15:17:44.940017 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.9399956870000001 podStartE2EDuration="1.939995687s" podCreationTimestamp="2026-01-31 15:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:44.921865902 +0000 UTC m=+1150.695194964" 
watchObservedRunningTime="2026-01-31 15:17:44.939995687 +0000 UTC m=+1150.713324739" Jan 31 15:17:45 crc kubenswrapper[4735]: I0131 15:17:45.491286 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:45 crc kubenswrapper[4735]: I0131 15:17:45.492101 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" containerName="kube-state-metrics" containerID="cri-o://323fb601661e4add284c619f2e09a68cb2a2c6496ad7bba5a5e3aa936d97a678" gracePeriod=30 Jan 31 15:17:45 crc kubenswrapper[4735]: I0131 15:17:45.878380 4735 generic.go:334] "Generic (PLEG): container finished" podID="bf00fd0b-9de0-4726-ae79-94596a39fffe" containerID="323fb601661e4add284c619f2e09a68cb2a2c6496ad7bba5a5e3aa936d97a678" exitCode=2 Jan 31 15:17:45 crc kubenswrapper[4735]: I0131 15:17:45.878549 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf00fd0b-9de0-4726-ae79-94596a39fffe","Type":"ContainerDied","Data":"323fb601661e4add284c619f2e09a68cb2a2c6496ad7bba5a5e3aa936d97a678"} Jan 31 15:17:45 crc kubenswrapper[4735]: E0131 15:17:45.973107 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 15:17:45 crc kubenswrapper[4735]: E0131 15:17:45.974551 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 15:17:45 crc kubenswrapper[4735]: E0131 15:17:45.978727 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 15:17:45 crc kubenswrapper[4735]: E0131 15:17:45.978763 4735 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerName="nova-scheduler-scheduler" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.061330 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.148517 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7dm8\" (UniqueName: \"kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8\") pod \"bf00fd0b-9de0-4726-ae79-94596a39fffe\" (UID: \"bf00fd0b-9de0-4726-ae79-94596a39fffe\") " Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.161610 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8" (OuterVolumeSpecName: "kube-api-access-x7dm8") pod "bf00fd0b-9de0-4726-ae79-94596a39fffe" (UID: "bf00fd0b-9de0-4726-ae79-94596a39fffe"). InnerVolumeSpecName "kube-api-access-x7dm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.250925 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7dm8\" (UniqueName: \"kubernetes.io/projected/bf00fd0b-9de0-4726-ae79-94596a39fffe-kube-api-access-x7dm8\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.511759 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.657953 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data\") pod \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.659302 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cw6q\" (UniqueName: \"kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q\") pod \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.660495 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle\") pod \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\" (UID: \"91529a48-98bc-4df9-b890-c6ad4eb96e2b\") " Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.663853 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q" (OuterVolumeSpecName: "kube-api-access-9cw6q") pod "91529a48-98bc-4df9-b890-c6ad4eb96e2b" (UID: "91529a48-98bc-4df9-b890-c6ad4eb96e2b"). InnerVolumeSpecName "kube-api-access-9cw6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.666807 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cw6q\" (UniqueName: \"kubernetes.io/projected/91529a48-98bc-4df9-b890-c6ad4eb96e2b-kube-api-access-9cw6q\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.706683 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data" (OuterVolumeSpecName: "config-data") pod "91529a48-98bc-4df9-b890-c6ad4eb96e2b" (UID: "91529a48-98bc-4df9-b890-c6ad4eb96e2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.709594 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91529a48-98bc-4df9-b890-c6ad4eb96e2b" (UID: "91529a48-98bc-4df9-b890-c6ad4eb96e2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.768535 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.768566 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91529a48-98bc-4df9-b890-c6ad4eb96e2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.888660 4735 generic.go:334] "Generic (PLEG): container finished" podID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" exitCode=0 Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.888906 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91529a48-98bc-4df9-b890-c6ad4eb96e2b","Type":"ContainerDied","Data":"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f"} Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.889073 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91529a48-98bc-4df9-b890-c6ad4eb96e2b","Type":"ContainerDied","Data":"149ddf3f4353a99acbac73b6938ea59341e59ee316d91778467708db520e6c22"} Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.889199 4735 scope.go:117] "RemoveContainer" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.889284 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.891298 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf00fd0b-9de0-4726-ae79-94596a39fffe","Type":"ContainerDied","Data":"03b9ed05a18a92f98bff483a455fc2610b2b04e0fdbc1867b759cb88b2fa245c"} Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.891447 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.916735 4735 scope.go:117] "RemoveContainer" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" Jan 31 15:17:46 crc kubenswrapper[4735]: E0131 15:17:46.917819 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f\": container with ID starting with 44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f not found: ID does not exist" containerID="44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.917938 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f"} err="failed to get container status \"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f\": rpc error: code = NotFound desc = could not find container \"44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f\": container with ID starting with 44776724bc24a64747a3e7698362badddbe5da19b497f1903f98836fa158360f not found: ID does not exist" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.917972 4735 scope.go:117] "RemoveContainer" containerID="323fb601661e4add284c619f2e09a68cb2a2c6496ad7bba5a5e3aa936d97a678" Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.974666 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.994187 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:46 crc kubenswrapper[4735]: I0131 15:17:46.998328 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.008491 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.016077 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: E0131 15:17:47.016552 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" containerName="kube-state-metrics" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.016574 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" containerName="kube-state-metrics" Jan 31 15:17:47 crc kubenswrapper[4735]: E0131 15:17:47.016601 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerName="nova-scheduler-scheduler" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.016608 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerName="nova-scheduler-scheduler" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.016798 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" containerName="kube-state-metrics" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.016818 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" containerName="nova-scheduler-scheduler" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.017486 4735 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.019869 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.022508 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.023350 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.032803 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.035032 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.037166 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.040785 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078475 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078574 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqq5\" (UniqueName: \"kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078599 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078625 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078661 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078838 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-config\") pod 
\"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.078899 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmkw\" (UniqueName: \"kubernetes.io/projected/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-api-access-cpmkw\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180169 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssqq5\" (UniqueName: \"kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180473 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180593 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmkw\" (UniqueName: \"kubernetes.io/projected/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-api-access-cpmkw\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.180647 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.185471 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " 
pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.186616 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.187405 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.187499 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.195999 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.207292 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmkw\" (UniqueName: \"kubernetes.io/projected/336b74ed-3ea3-4963-9497-a02b65b80a3e-kube-api-access-cpmkw\") pod \"kube-state-metrics-0\" (UID: \"336b74ed-3ea3-4963-9497-a02b65b80a3e\") " pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.207933 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssqq5\" (UniqueName: \"kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5\") pod \"nova-scheduler-0\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.246343 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.291355 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.291765 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-central-agent" containerID="cri-o://3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4" gracePeriod=30 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.291798 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="sg-core" containerID="cri-o://7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52" gracePeriod=30 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.291888 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="proxy-httpd" 
containerID="cri-o://22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7" gracePeriod=30 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.291886 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-notification-agent" containerID="cri-o://dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8" gracePeriod=30 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.336125 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.352873 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.566729 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91529a48-98bc-4df9-b890-c6ad4eb96e2b" path="/var/lib/kubelet/pods/91529a48-98bc-4df9-b890-c6ad4eb96e2b/volumes" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.569714 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf00fd0b-9de0-4726-ae79-94596a39fffe" path="/var/lib/kubelet/pods/bf00fd0b-9de0-4726-ae79-94596a39fffe/volumes" Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.903610 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919584 4735 generic.go:334] "Generic (PLEG): container finished" podID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerID="22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7" exitCode=0 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919610 4735 generic.go:334] "Generic (PLEG): container finished" podID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerID="7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52" exitCode=2 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919619 4735 generic.go:334] "Generic (PLEG): container finished" podID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerID="3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4" exitCode=0 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919670 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerDied","Data":"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7"} Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919694 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerDied","Data":"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52"} Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.919705 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerDied","Data":"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4"} Jan 31 15:17:47 crc kubenswrapper[4735]: W0131 15:17:47.923507 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod336b74ed_3ea3_4963_9497_a02b65b80a3e.slice/crio-60d990b9243d4131b95b83b63208a29a945a18b722c6f152116d8022c420ffbd WatchSource:0}: Error finding container 60d990b9243d4131b95b83b63208a29a945a18b722c6f152116d8022c420ffbd: Status 404 
returned error can't find the container with id 60d990b9243d4131b95b83b63208a29a945a18b722c6f152116d8022c420ffbd Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.929801 4735 generic.go:334] "Generic (PLEG): container finished" podID="00cc8be9-5204-43ea-a723-a25e411adecc" containerID="1fce3456a9d680dd9a35fcc2c2c0d4c58148a4a8fc1249820d4580d12bb0d074" exitCode=0 Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.929902 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerDied","Data":"1fce3456a9d680dd9a35fcc2c2c0d4c58148a4a8fc1249820d4580d12bb0d074"} Jan 31 15:17:47 crc kubenswrapper[4735]: I0131 15:17:47.931010 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.038627 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.102597 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs\") pod \"00cc8be9-5204-43ea-a723-a25e411adecc\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.102701 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data\") pod \"00cc8be9-5204-43ea-a723-a25e411adecc\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.102871 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle\") pod \"00cc8be9-5204-43ea-a723-a25e411adecc\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.103006 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7r8\" (UniqueName: \"kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8\") pod \"00cc8be9-5204-43ea-a723-a25e411adecc\" (UID: \"00cc8be9-5204-43ea-a723-a25e411adecc\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.103173 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs" (OuterVolumeSpecName: "logs") pod "00cc8be9-5204-43ea-a723-a25e411adecc" (UID: "00cc8be9-5204-43ea-a723-a25e411adecc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.103714 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00cc8be9-5204-43ea-a723-a25e411adecc-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.108753 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8" (OuterVolumeSpecName: "kube-api-access-sn7r8") pod "00cc8be9-5204-43ea-a723-a25e411adecc" (UID: "00cc8be9-5204-43ea-a723-a25e411adecc"). InnerVolumeSpecName "kube-api-access-sn7r8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.129537 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data" (OuterVolumeSpecName: "config-data") pod "00cc8be9-5204-43ea-a723-a25e411adecc" (UID: "00cc8be9-5204-43ea-a723-a25e411adecc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.131819 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00cc8be9-5204-43ea-a723-a25e411adecc" (UID: "00cc8be9-5204-43ea-a723-a25e411adecc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.209971 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.210010 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn7r8\" (UniqueName: \"kubernetes.io/projected/00cc8be9-5204-43ea-a723-a25e411adecc-kube-api-access-sn7r8\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.210024 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00cc8be9-5204-43ea-a723-a25e411adecc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.500396 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.572090 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.572228 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619482 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619531 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619610 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhknz\" (UniqueName: \"kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619633 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619660 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619688 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.619732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts\") pod \"bca15585-7b5d-4b8c-a2bb-4be714292716\" (UID: \"bca15585-7b5d-4b8c-a2bb-4be714292716\") " Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.620728 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.621061 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.627390 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts" (OuterVolumeSpecName: "scripts") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.628782 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz" (OuterVolumeSpecName: "kube-api-access-zhknz") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "kube-api-access-zhknz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.656435 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.719614 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721644 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhknz\" (UniqueName: \"kubernetes.io/projected/bca15585-7b5d-4b8c-a2bb-4be714292716-kube-api-access-zhknz\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721676 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721685 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721695 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bca15585-7b5d-4b8c-a2bb-4be714292716-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721706 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.721717 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.742158 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data" (OuterVolumeSpecName: "config-data") pod "bca15585-7b5d-4b8c-a2bb-4be714292716" (UID: "bca15585-7b5d-4b8c-a2bb-4be714292716"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.823689 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15585-7b5d-4b8c-a2bb-4be714292716-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.948352 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"336b74ed-3ea3-4963-9497-a02b65b80a3e","Type":"ContainerStarted","Data":"69020f5a3de4b092c95ed0308de562ce21dca077516f71375b97d58d36a28fc5"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.948404 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"336b74ed-3ea3-4963-9497-a02b65b80a3e","Type":"ContainerStarted","Data":"60d990b9243d4131b95b83b63208a29a945a18b722c6f152116d8022c420ffbd"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.949726 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.952060 4735 generic.go:334] "Generic (PLEG): container finished" podID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerID="dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8" exitCode=0 Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.952112 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerDied","Data":"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.952137 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bca15585-7b5d-4b8c-a2bb-4be714292716","Type":"ContainerDied","Data":"0ad7f47ebdd1ebdb96a2a4bf7ce6376c5f9bc04bef54c19b11b8c8530b200455"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.952157 4735 scope.go:117] "RemoveContainer" containerID="22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.952266 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.960316 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2deb7f44-da52-49a0-a43f-36341a898af4","Type":"ContainerStarted","Data":"dcac7202e44b135ec2115ad9bbc507ab1c47bf89ea8d6dfc412a7b825fbbf87a"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.960348 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2deb7f44-da52-49a0-a43f-36341a898af4","Type":"ContainerStarted","Data":"ec7f037924ef3da5e6e34e39d46f399efce45d085825b4634ae95959014acf9f"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.969257 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.969323 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"00cc8be9-5204-43ea-a723-a25e411adecc","Type":"ContainerDied","Data":"67cad16ffe1a1f3f61dcd5bf651054c70e460e2a199613197c91be706f801a74"} Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.984001 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.627422686 podStartE2EDuration="2.983981147s" podCreationTimestamp="2026-01-31 15:17:46 +0000 UTC" firstStartedPulling="2026-01-31 15:17:47.925603923 +0000 UTC m=+1153.698932965" lastFinishedPulling="2026-01-31 15:17:48.282162384 +0000 UTC m=+1154.055491426" observedRunningTime="2026-01-31 15:17:48.970932217 +0000 UTC m=+1154.744261269" watchObservedRunningTime="2026-01-31 15:17:48.983981147 +0000 UTC m=+1154.757310199" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.984616 4735 scope.go:117] "RemoveContainer" containerID="7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52" Jan 31 15:17:48 crc kubenswrapper[4735]: I0131 15:17:48.997734 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.997718566 podStartE2EDuration="2.997718566s" podCreationTimestamp="2026-01-31 15:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:48.988192936 +0000 UTC m=+1154.761521978" watchObservedRunningTime="2026-01-31 15:17:48.997718566 +0000 UTC m=+1154.771047608" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.047809 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.054126 4735 scope.go:117] "RemoveContainer" containerID="dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.074963 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.100887 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.102845 4735 scope.go:117] "RemoveContainer" containerID="3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.106558 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115285 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-central-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115322 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-central-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115336 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-notification-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115346 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-notification-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115435 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="proxy-httpd" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115447 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="proxy-httpd" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115474 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="sg-core" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115485 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="sg-core" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115506 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-log" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115518 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-log" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.115533 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-api" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115543 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-api" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115762 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-log" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115782 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-notification-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115806 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" containerName="nova-api-api" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115818 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="ceilometer-central-agent" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115837 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="proxy-httpd" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.115856 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" containerName="sg-core" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.117915 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.119589 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.119966 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.121403 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.125253 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.128823 4735 scope.go:117] "RemoveContainer" containerID="22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.129100 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7\": container with ID starting with 22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7 not found: ID does not exist" containerID="22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129127 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7"} err="failed to get container status \"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7\": rpc error: code = NotFound desc = could not find container \"22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7\": container with ID starting with 22254dad57dcb63c3c245a8fb8428617f98240759e8a0e1b216fd44ba5a316e7 not found: ID does not exist" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129146 4735 scope.go:117] "RemoveContainer" containerID="7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.129362 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52\": container with ID starting with 7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52 not found: ID does not exist" containerID="7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129378 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52"} err="failed to get container status \"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52\": rpc error: code = NotFound desc = could not find container \"7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52\": container with ID starting with 7c01396bcb5815976caa3dcbf47857e2b9b207336d7a6a573b34f3e939a38d52 not found: ID does not exist" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129390 4735 scope.go:117] "RemoveContainer" containerID="dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.129601 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8\": container with ID starting with dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8 not found: ID does not exist" containerID="dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129616 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8"} err="failed to get container status \"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8\": rpc error: code = NotFound desc = could not find container \"dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8\": container with ID starting with dd68fb53dff37da002c3b5b520c08ff0809c421da8e232fdb1027ec1c79cb2a8 not found: ID does not exist" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129628 4735 scope.go:117] "RemoveContainer" containerID="3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4" Jan 31 15:17:49 crc kubenswrapper[4735]: E0131 15:17:49.129778 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4\": container with ID starting with 3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4 not found: ID does not exist" containerID="3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129796 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4"} err="failed to get container status \"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4\": rpc error: code = NotFound desc = could not find container \"3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4\": container with ID starting with 3c0bca9620092850d6af45cd500e571858482a0a1c57abfad13272067c69daa4 not found: ID does not exist" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.129810 4735 scope.go:117] "RemoveContainer" containerID="1fce3456a9d680dd9a35fcc2c2c0d4c58148a4a8fc1249820d4580d12bb0d074" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.137521 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.150444 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.151940 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.156825 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.159619 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.166783 4735 scope.go:117] "RemoveContainer" containerID="31ac21e0cbc815d22d592926db58da88c01c6ef087560513302ae2f50339b350" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234172 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234227 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234249 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234293 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7b4\" (UniqueName: \"kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234312 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234331 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234359 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234380 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 
15:17:49.234409 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234436 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284wr\" (UniqueName: \"kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234458 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.234485 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336173 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7b4\" (UniqueName: \"kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336302 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336350 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336462 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336518 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336565 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " 
pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336603 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284wr\" (UniqueName: \"kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336659 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336724 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336890 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.336973 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.337018 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.338742 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.339127 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.340344 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.341817 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc 
kubenswrapper[4735]: I0131 15:17:49.344204 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.344918 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.345057 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.354091 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.355116 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.356356 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.356915 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7b4\" (UniqueName: \"kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4\") pod \"nova-api-0\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.358682 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284wr\" (UniqueName: \"kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr\") pod \"ceilometer-0\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.457516 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.476929 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.551114 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cc8be9-5204-43ea-a723-a25e411adecc" path="/var/lib/kubelet/pods/00cc8be9-5204-43ea-a723-a25e411adecc/volumes" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.552355 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca15585-7b5d-4b8c-a2bb-4be714292716" path="/var/lib/kubelet/pods/bca15585-7b5d-4b8c-a2bb-4be714292716/volumes" Jan 31 15:17:49 crc kubenswrapper[4735]: I0131 15:17:49.971507 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:17:50 crc kubenswrapper[4735]: W0131 15:17:50.016172 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15cd1ce6_3a10_4624_b2bd_fa0d77b68060.slice/crio-66194855be5a230af346b8d1e08ddac82525a056b8accccf132f135ad31084c8 WatchSource:0}: Error finding container 66194855be5a230af346b8d1e08ddac82525a056b8accccf132f135ad31084c8: Status 404 returned error can't find the container with id 66194855be5a230af346b8d1e08ddac82525a056b8accccf132f135ad31084c8 Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.017394 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.995170 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerStarted","Data":"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087"} Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.995688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerStarted","Data":"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546"} Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.995702 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerStarted","Data":"66194855be5a230af346b8d1e08ddac82525a056b8accccf132f135ad31084c8"} Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.998983 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerStarted","Data":"525206f633aadbef230e2309eb375151f9953cf2e0ea18d90a08ae4597a0e8d7"} Jan 31 15:17:50 crc kubenswrapper[4735]: I0131 15:17:50.999020 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerStarted","Data":"c840debfb1f0415916cf73fe1b7d201c83faa75dbbe7d30de38450b5f4968877"} Jan 31 15:17:51 crc kubenswrapper[4735]: I0131 15:17:51.027101 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.027083475 podStartE2EDuration="2.027083475s" podCreationTimestamp="2026-01-31 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:17:51.019224732 +0000 UTC m=+1156.792553774" watchObservedRunningTime="2026-01-31 15:17:51.027083475 +0000 UTC m=+1156.800412517" Jan 31 15:17:52 crc kubenswrapper[4735]: I0131 15:17:52.020283 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerStarted","Data":"ae87a2cde5f64ce5cfd58a6963c13949829f8b2e928f96160e98fdf032f04617"} Jan 31 15:17:52 crc kubenswrapper[4735]: I0131 15:17:52.352937 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 15:17:53 crc kubenswrapper[4735]: I0131 15:17:53.030737 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerStarted","Data":"4f26efe6a8fd761a7f5832813360783bdc29f0bb07e92cb9fc9b770fd031d8bc"} Jan 31 15:17:53 crc kubenswrapper[4735]: I0131 15:17:53.571765 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 15:17:53 crc kubenswrapper[4735]: I0131 15:17:53.571858 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 15:17:54 crc kubenswrapper[4735]: I0131 15:17:54.584592 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 15:17:54 crc kubenswrapper[4735]: I0131 15:17:54.586564 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 15:17:55 crc kubenswrapper[4735]: I0131 15:17:55.052325 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerStarted","Data":"85cb396129f367f85949d208795f4ac2850d08fa923c6f50275aadbfdcd80957"} Jan 31 15:17:55 crc kubenswrapper[4735]: I0131 15:17:55.052633 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:17:55 crc kubenswrapper[4735]: I0131 15:17:55.076142 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.895002899 podStartE2EDuration="6.076120989s" podCreationTimestamp="2026-01-31 15:17:49 +0000 UTC" firstStartedPulling="2026-01-31 15:17:49.979329482 +0000 UTC m=+1155.752658524" lastFinishedPulling="2026-01-31 15:17:54.160447542 +0000 UTC m=+1159.933776614" observedRunningTime="2026-01-31 15:17:55.071705703 +0000 UTC m=+1160.845034746" watchObservedRunningTime="2026-01-31 15:17:55.076120989 +0000 UTC m=+1160.849450031" Jan 31 15:17:57 crc kubenswrapper[4735]: I0131 15:17:57.345246 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 15:17:57 crc kubenswrapper[4735]: I0131 15:17:57.353818 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 15:17:57 crc kubenswrapper[4735]: I0131 15:17:57.402984 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 15:17:58 crc kubenswrapper[4735]: I0131 15:17:58.121151 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 15:17:59 crc kubenswrapper[4735]: I0131 
15:17:59.478320 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:17:59 crc kubenswrapper[4735]: I0131 15:17:59.478593 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:18:00 crc kubenswrapper[4735]: I0131 15:18:00.562044 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 15:18:00 crc kubenswrapper[4735]: I0131 15:18:00.562391 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 15:18:03 crc kubenswrapper[4735]: I0131 15:18:03.578295 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 15:18:03 crc kubenswrapper[4735]: I0131 15:18:03.578663 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 15:18:03 crc kubenswrapper[4735]: I0131 15:18:03.585082 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 15:18:03 crc kubenswrapper[4735]: I0131 15:18:03.588349 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.165238 4735 generic.go:334] "Generic (PLEG): container finished" podID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" containerID="c5bfc53fb7b8f0ee0d443dfea905b1af52ad046d79db9fa4f9144e106f6cc258" exitCode=137 Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.165294 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1f9b8c3-b871-474d-92c4-9421274fdbbc","Type":"ContainerDied","Data":"c5bfc53fb7b8f0ee0d443dfea905b1af52ad046d79db9fa4f9144e106f6cc258"} Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.166532 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1f9b8c3-b871-474d-92c4-9421274fdbbc","Type":"ContainerDied","Data":"1cfbd95c2c6418046f28af9f871e6752df67070c06ee4f3781e36a2bc82aa938"} Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.166569 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cfbd95c2c6418046f28af9f871e6752df67070c06ee4f3781e36a2bc82aa938" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.202815 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.268156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data\") pod \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.268232 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle\") pod \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.268378 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmg44\" (UniqueName: \"kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44\") pod \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\" (UID: \"e1f9b8c3-b871-474d-92c4-9421274fdbbc\") " Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.275696 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44" (OuterVolumeSpecName: "kube-api-access-vmg44") pod "e1f9b8c3-b871-474d-92c4-9421274fdbbc" (UID: "e1f9b8c3-b871-474d-92c4-9421274fdbbc"). InnerVolumeSpecName "kube-api-access-vmg44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.302041 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data" (OuterVolumeSpecName: "config-data") pod "e1f9b8c3-b871-474d-92c4-9421274fdbbc" (UID: "e1f9b8c3-b871-474d-92c4-9421274fdbbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.303318 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1f9b8c3-b871-474d-92c4-9421274fdbbc" (UID: "e1f9b8c3-b871-474d-92c4-9421274fdbbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.372773 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmg44\" (UniqueName: \"kubernetes.io/projected/e1f9b8c3-b871-474d-92c4-9421274fdbbc-kube-api-access-vmg44\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.372818 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:06 crc kubenswrapper[4735]: I0131 15:18:06.372838 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9b8c3-b871-474d-92c4-9421274fdbbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.178316 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.229301 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.237395 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.258262 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:18:07 crc kubenswrapper[4735]: E0131 15:18:07.258721 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.258747 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.259003 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.263587 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.267362 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.267819 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.271188 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.292599 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.345697 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.345794 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.345856 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.346965 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.347083 4735 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c" gracePeriod=600 Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.398394 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.398462 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxvt\" (UniqueName: \"kubernetes.io/projected/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-kube-api-access-mwxvt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.398489 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.398507 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.398880 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.500822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.500872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxvt\" (UniqueName: \"kubernetes.io/projected/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-kube-api-access-mwxvt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.500895 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.500915 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.501032 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.505560 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.506508 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.508012 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.511509 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.527064 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxvt\" (UniqueName: \"kubernetes.io/projected/0ecaa245-bfd5-42b9-b10f-117b0dfef5cb-kube-api-access-mwxvt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.550382 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f9b8c3-b871-474d-92c4-9421274fdbbc" path="/var/lib/kubelet/pods/e1f9b8c3-b871-474d-92c4-9421274fdbbc/volumes" Jan 31 15:18:07 crc kubenswrapper[4735]: I0131 15:18:07.586134 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.078022 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.193573 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb","Type":"ContainerStarted","Data":"817682ef8bb2b3d60094aad42d798b381e91e11ba5dbabf42e51192d78f91ac2"} Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.196516 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c" exitCode=0 Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.196547 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c"} Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.196565 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53"} Jan 31 15:18:08 crc kubenswrapper[4735]: I0131 15:18:08.196581 4735 scope.go:117] "RemoveContainer" containerID="44d311243b748398a9da0dd03084850d58b11fe86f145873e87ba9bc40d33264" Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.211196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ecaa245-bfd5-42b9-b10f-117b0dfef5cb","Type":"ContainerStarted","Data":"1376e3da458f5e48c9a9b9980a4771fac37504e9d69a32668e4efab6840516a5"} Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.240908 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.240879963 podStartE2EDuration="2.240879963s" podCreationTimestamp="2026-01-31 15:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:09.228414422 +0000 UTC m=+1175.001743514" watchObservedRunningTime="2026-01-31 15:18:09.240879963 +0000 UTC m=+1175.014209045" Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.483949 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.485316 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.491250 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 15:18:09 crc kubenswrapper[4735]: I0131 15:18:09.494406 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.221475 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.226036 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.436031 
4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.451659 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.461230 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569141 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569178 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75d6\" (UniqueName: \"kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569251 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569314 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.569342 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.670562 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.670609 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75d6\" (UniqueName: \"kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6\") pod 
\"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.670639 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.670703 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.670822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.671226 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.671710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.671736 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.671787 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.671898 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.672163 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc 
kubenswrapper[4735]: I0131 15:18:10.697621 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75d6\" (UniqueName: \"kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6\") pod \"dnsmasq-dns-89c5cd4d5-tft9k\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:10 crc kubenswrapper[4735]: I0131 15:18:10.785277 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:11 crc kubenswrapper[4735]: I0131 15:18:11.297882 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.154409 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.155052 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-central-agent" containerID="cri-o://525206f633aadbef230e2309eb375151f9953cf2e0ea18d90a08ae4597a0e8d7" gracePeriod=30 Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.155172 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="sg-core" containerID="cri-o://4f26efe6a8fd761a7f5832813360783bdc29f0bb07e92cb9fc9b770fd031d8bc" gracePeriod=30 Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.155214 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-notification-agent" containerID="cri-o://ae87a2cde5f64ce5cfd58a6963c13949829f8b2e928f96160e98fdf032f04617" gracePeriod=30 Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.155190 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="proxy-httpd" containerID="cri-o://85cb396129f367f85949d208795f4ac2850d08fa923c6f50275aadbfdcd80957" gracePeriod=30 Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.161384 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF" Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.242018 4735 generic.go:334] "Generic (PLEG): container finished" podID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerID="37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98" exitCode=0 Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.242235 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" event={"ID":"7e03a033-a1e4-4008-93d8-02ade8bd23dd","Type":"ContainerDied","Data":"37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98"} Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.242644 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" event={"ID":"7e03a033-a1e4-4008-93d8-02ade8bd23dd","Type":"ContainerStarted","Data":"93b8451875a17499c1d50e22463ec01db82e9a73b7d2f1447c1a5e9193ea4807"} Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.586834 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:12 crc kubenswrapper[4735]: I0131 15:18:12.856022 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256703 4735 generic.go:334] "Generic (PLEG): container finished" podID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerID="85cb396129f367f85949d208795f4ac2850d08fa923c6f50275aadbfdcd80957" exitCode=0 Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256747 4735 generic.go:334] "Generic (PLEG): container finished" podID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerID="4f26efe6a8fd761a7f5832813360783bdc29f0bb07e92cb9fc9b770fd031d8bc" exitCode=2 Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256763 4735 generic.go:334] "Generic (PLEG): container finished" podID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerID="525206f633aadbef230e2309eb375151f9953cf2e0ea18d90a08ae4597a0e8d7" exitCode=0 Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256826 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerDied","Data":"85cb396129f367f85949d208795f4ac2850d08fa923c6f50275aadbfdcd80957"} Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256941 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerDied","Data":"4f26efe6a8fd761a7f5832813360783bdc29f0bb07e92cb9fc9b770fd031d8bc"} Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.256974 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerDied","Data":"525206f633aadbef230e2309eb375151f9953cf2e0ea18d90a08ae4597a0e8d7"} Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.259307 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" event={"ID":"7e03a033-a1e4-4008-93d8-02ade8bd23dd","Type":"ContainerStarted","Data":"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b"} Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.259542 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-log" containerID="cri-o://ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546" gracePeriod=30 Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.259592 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-api" containerID="cri-o://46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087" gracePeriod=30 Jan 31 15:18:13 crc kubenswrapper[4735]: I0131 15:18:13.295351 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" podStartSLOduration=3.295325118 podStartE2EDuration="3.295325118s" podCreationTimestamp="2026-01-31 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:13.283252507 +0000 UTC m=+1179.056581589" watchObservedRunningTime="2026-01-31 15:18:13.295325118 +0000 UTC m=+1179.068654190" Jan 31 15:18:14 crc kubenswrapper[4735]: I0131 15:18:14.270878 4735 generic.go:334] "Generic (PLEG): container 
finished" podID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerID="ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546" exitCode=143 Jan 31 15:18:14 crc kubenswrapper[4735]: I0131 15:18:14.270959 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerDied","Data":"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546"} Jan 31 15:18:14 crc kubenswrapper[4735]: I0131 15:18:14.271413 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:16 crc kubenswrapper[4735]: I0131 15:18:16.888151 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.010490 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle\") pod \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.010687 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7b4\" (UniqueName: \"kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4\") pod \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.010741 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs\") pod \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.010964 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data\") pod \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\" (UID: \"15cd1ce6-3a10-4624-b2bd-fa0d77b68060\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.011909 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs" (OuterVolumeSpecName: "logs") pod "15cd1ce6-3a10-4624-b2bd-fa0d77b68060" (UID: "15cd1ce6-3a10-4624-b2bd-fa0d77b68060"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.020025 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4" (OuterVolumeSpecName: "kube-api-access-zl7b4") pod "15cd1ce6-3a10-4624-b2bd-fa0d77b68060" (UID: "15cd1ce6-3a10-4624-b2bd-fa0d77b68060"). InnerVolumeSpecName "kube-api-access-zl7b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.043172 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data" (OuterVolumeSpecName: "config-data") pod "15cd1ce6-3a10-4624-b2bd-fa0d77b68060" (UID: "15cd1ce6-3a10-4624-b2bd-fa0d77b68060"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.057079 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15cd1ce6-3a10-4624-b2bd-fa0d77b68060" (UID: "15cd1ce6-3a10-4624-b2bd-fa0d77b68060"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.113753 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.113785 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.113797 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl7b4\" (UniqueName: \"kubernetes.io/projected/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-kube-api-access-zl7b4\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.113808 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cd1ce6-3a10-4624-b2bd-fa0d77b68060-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.310715 4735 generic.go:334] "Generic (PLEG): container finished" podID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerID="46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087" exitCode=0 Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.310856 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.310868 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerDied","Data":"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087"} Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.310922 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15cd1ce6-3a10-4624-b2bd-fa0d77b68060","Type":"ContainerDied","Data":"66194855be5a230af346b8d1e08ddac82525a056b8accccf132f135ad31084c8"} Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.310940 4735 scope.go:117] "RemoveContainer" containerID="46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.328713 4735 generic.go:334] "Generic (PLEG): container finished" podID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerID="ae87a2cde5f64ce5cfd58a6963c13949829f8b2e928f96160e98fdf032f04617" exitCode=0 Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.328762 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerDied","Data":"ae87a2cde5f64ce5cfd58a6963c13949829f8b2e928f96160e98fdf032f04617"} Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.334348 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.351464 4735 scope.go:117] "RemoveContainer" containerID="ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.354246 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.381567 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.397560 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.398613 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="proxy-httpd" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.398709 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="proxy-httpd" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.398807 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-api" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.398880 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-api" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.399045 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-log" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399124 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-log" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.399207 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="sg-core" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399280 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="sg-core" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.399370 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-notification-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399470 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-notification-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.399562 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-central-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399634 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-central-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399894 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="proxy-httpd" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.399999 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-log" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.400084 4735 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-central-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.400164 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="sg-core" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.400256 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" containerName="nova-api-api" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.400341 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" containerName="ceilometer-notification-agent" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.402467 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.409322 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.409526 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.409548 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419414 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419771 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419815 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419885 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419914 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.419936 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-284wr\" (UniqueName: \"kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: 
I0131 15:18:17.419970 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.420030 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data\") pod \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\" (UID: \"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe\") " Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.421729 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.421844 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.428415 4735 scope.go:117] "RemoveContainer" containerID="46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.430601 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts" (OuterVolumeSpecName: "scripts") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.430727 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr" (OuterVolumeSpecName: "kube-api-access-284wr") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "kube-api-access-284wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.432663 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087\": container with ID starting with 46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087 not found: ID does not exist" containerID="46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.432712 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087"} err="failed to get container status \"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087\": rpc error: code = NotFound desc = could not find container \"46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087\": container with ID starting with 46c123c64795297f9ab687e99adbca36b6021aba5958dfaff1a4f8493db79087 not found: ID does not exist" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.432737 4735 scope.go:117] "RemoveContainer" containerID="ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.435265 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:17 crc kubenswrapper[4735]: E0131 15:18:17.435610 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546\": container with ID starting with ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546 not found: ID does not exist" containerID="ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.435705 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546"} err="failed to get container status \"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546\": rpc error: code = NotFound desc = could not find container \"ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546\": container with ID starting with ece7b5d6ac83466ba7236f7470c314a505b3b0d5a6bfe930a85c66f074f4d546 not found: ID does not exist" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.475647 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.481988 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.517947 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522252 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522314 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522533 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsfs\" (UniqueName: \"kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522826 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522945 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522964 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522975 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc 
kubenswrapper[4735]: I0131 15:18:17.522986 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.522994 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-284wr\" (UniqueName: \"kubernetes.io/projected/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-kube-api-access-284wr\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.523004 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.523013 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.542112 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data" (OuterVolumeSpecName: "config-data") pod "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" (UID: "1c8ad78d-d4ae-477b-ba83-28fc3f132cbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.558155 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cd1ce6-3a10-4624-b2bd-fa0d77b68060" path="/var/lib/kubelet/pods/15cd1ce6-3a10-4624-b2bd-fa0d77b68060/volumes" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.587379 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.607298 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625004 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsfs\" (UniqueName: \"kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625128 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625189 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625232 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 
15:18:17.625273 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625322 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.625397 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.626334 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.629546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.630622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.631103 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.633110 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.641913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsfs\" (UniqueName: \"kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs\") pod \"nova-api-0\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " pod="openstack/nova-api-0" Jan 31 15:18:17 crc kubenswrapper[4735]: I0131 15:18:17.741049 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.227118 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.340941 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerStarted","Data":"37671c7384574015a9396acd05e987325a10416e0e3ded5eca6ecce6d93629c7"} Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.344855 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.344890 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c8ad78d-d4ae-477b-ba83-28fc3f132cbe","Type":"ContainerDied","Data":"c840debfb1f0415916cf73fe1b7d201c83faa75dbbe7d30de38450b5f4968877"} Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.345003 4735 scope.go:117] "RemoveContainer" containerID="85cb396129f367f85949d208795f4ac2850d08fa923c6f50275aadbfdcd80957" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.370469 4735 scope.go:117] "RemoveContainer" containerID="4f26efe6a8fd761a7f5832813360783bdc29f0bb07e92cb9fc9b770fd031d8bc" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.374082 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.377259 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.383290 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.408998 4735 scope.go:117] "RemoveContainer" containerID="ae87a2cde5f64ce5cfd58a6963c13949829f8b2e928f96160e98fdf032f04617" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.409132 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.411486 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.415288 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.415637 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.415827 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.450231 4735 scope.go:117] "RemoveContainer" containerID="525206f633aadbef230e2309eb375151f9953cf2e0ea18d90a08ae4597a0e8d7" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.457835 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.547118 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-config-data\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.547160 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.547200 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.547223 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.547257 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.550504 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vrvv\" (UniqueName: \"kubernetes.io/projected/3025f8d1-e415-4db4-815a-97b6ef8d09dc-kube-api-access-7vrvv\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.550563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-scripts\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 
15:18:18.550624 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.572553 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lddf6"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.573744 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.580242 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.580379 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.585890 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lddf6"] Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652562 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652638 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652676 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652736 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652757 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vrvv\" (UniqueName: \"kubernetes.io/projected/3025f8d1-e415-4db4-815a-97b6ef8d09dc-kube-api-access-7vrvv\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-scripts\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.652944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.653018 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-config-data\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.653092 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-log-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.653550 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3025f8d1-e415-4db4-815a-97b6ef8d09dc-run-httpd\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.662050 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-scripts\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.662569 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.663207 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.663833 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.667028 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3025f8d1-e415-4db4-815a-97b6ef8d09dc-config-data\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.668949 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vrvv\" (UniqueName: \"kubernetes.io/projected/3025f8d1-e415-4db4-815a-97b6ef8d09dc-kube-api-access-7vrvv\") pod \"ceilometer-0\" (UID: \"3025f8d1-e415-4db4-815a-97b6ef8d09dc\") " pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.733380 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.761242 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgc2\" (UniqueName: \"kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.761350 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.761495 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.761557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.863411 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgc2\" (UniqueName: \"kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.864286 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.865003 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.865072 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.871769 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: 
\"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.871769 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.874680 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:18 crc kubenswrapper[4735]: I0131 15:18:18.878569 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgc2\" (UniqueName: \"kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2\") pod \"nova-cell1-cell-mapping-lddf6\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.100096 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.187571 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 15:18:19 crc kubenswrapper[4735]: W0131 15:18:19.188728 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3025f8d1_e415_4db4_815a_97b6ef8d09dc.slice/crio-3fcbdfe335aeefe13ef296733b5a6467d61593e0b830c7040bf7da5df7456aad WatchSource:0}: Error finding container 3fcbdfe335aeefe13ef296733b5a6467d61593e0b830c7040bf7da5df7456aad: Status 404 returned error can't find the container with id 3fcbdfe335aeefe13ef296733b5a6467d61593e0b830c7040bf7da5df7456aad Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.190998 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.355093 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerStarted","Data":"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e"} Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.355332 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerStarted","Data":"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869"} Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.361038 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3025f8d1-e415-4db4-815a-97b6ef8d09dc","Type":"ContainerStarted","Data":"3fcbdfe335aeefe13ef296733b5a6467d61593e0b830c7040bf7da5df7456aad"} Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.389367 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3893481579999998 podStartE2EDuration="2.389348158s" podCreationTimestamp="2026-01-31 15:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:19.378018098 +0000 UTC m=+1185.151347140" watchObservedRunningTime="2026-01-31 15:18:19.389348158 +0000 UTC m=+1185.162677210" Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.551579 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8ad78d-d4ae-477b-ba83-28fc3f132cbe" path="/var/lib/kubelet/pods/1c8ad78d-d4ae-477b-ba83-28fc3f132cbe/volumes" Jan 31 15:18:19 crc kubenswrapper[4735]: I0131 15:18:19.588040 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lddf6"] Jan 31 15:18:19 crc kubenswrapper[4735]: W0131 15:18:19.593080 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod014134de_eb91_414a_a8a8_6ffe3cae5e72.slice/crio-2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1 WatchSource:0}: Error finding container 2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1: Status 404 returned error can't find the container with id 2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1 Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.382092 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3025f8d1-e415-4db4-815a-97b6ef8d09dc","Type":"ContainerStarted","Data":"8313a462ced09f8bb03ceeb9819916d7d7f7b3d67da877039d0b053c87cd9959"} Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.386603 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lddf6" event={"ID":"014134de-eb91-414a-a8a8-6ffe3cae5e72","Type":"ContainerStarted","Data":"961e2e391877e8b195fc8a0c8d463824ce20500e9ec683ddf6d7b7f308c17a87"} Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.386682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lddf6" event={"ID":"014134de-eb91-414a-a8a8-6ffe3cae5e72","Type":"ContainerStarted","Data":"2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1"} Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.404645 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lddf6" podStartSLOduration=2.404630574 podStartE2EDuration="2.404630574s" podCreationTimestamp="2026-01-31 15:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:20.40272388 +0000 UTC m=+1186.176052942" watchObservedRunningTime="2026-01-31 15:18:20.404630574 +0000 UTC m=+1186.177959616" Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.787632 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.858314 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:18:20 crc kubenswrapper[4735]: I0131 15:18:20.858661 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="dnsmasq-dns" containerID="cri-o://2b1d6086fc523bcd996138c556cfbd768e0bd50d2055029006498d8fe6b511e9" gracePeriod=10 Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.303937 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" 
podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused" Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.398985 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b397911-cf10-488e-af94-e0115c62b95b" containerID="2b1d6086fc523bcd996138c556cfbd768e0bd50d2055029006498d8fe6b511e9" exitCode=0 Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.399058 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" event={"ID":"2b397911-cf10-488e-af94-e0115c62b95b","Type":"ContainerDied","Data":"2b1d6086fc523bcd996138c556cfbd768e0bd50d2055029006498d8fe6b511e9"} Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.407611 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3025f8d1-e415-4db4-815a-97b6ef8d09dc","Type":"ContainerStarted","Data":"e956e5b6c08d785f7d5314fce9bed325e18b2b70bdf1d9645db1bf894076cec7"} Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.830002 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.953756 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.954020 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.954081 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.954145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.954179 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.954217 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djn2p\" (UniqueName: \"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p\") pod \"2b397911-cf10-488e-af94-e0115c62b95b\" (UID: \"2b397911-cf10-488e-af94-e0115c62b95b\") " Jan 31 15:18:21 crc kubenswrapper[4735]: I0131 15:18:21.960140 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p" (OuterVolumeSpecName: "kube-api-access-djn2p") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "kube-api-access-djn2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.008159 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.012835 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config" (OuterVolumeSpecName: "config") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.037907 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.047829 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.048361 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b397911-cf10-488e-af94-e0115c62b95b" (UID: "2b397911-cf10-488e-af94-e0115c62b95b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056243 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056279 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056289 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056299 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056308 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djn2p\" (UniqueName: \"kubernetes.io/projected/2b397911-cf10-488e-af94-e0115c62b95b-kube-api-access-djn2p\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.056317 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b397911-cf10-488e-af94-e0115c62b95b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.422307 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" event={"ID":"2b397911-cf10-488e-af94-e0115c62b95b","Type":"ContainerDied","Data":"23da57fb5343c8a5ab3a8f9b5051e08a0a5c3df56889bfbd23a7f14f3821f990"} Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.422356 4735 scope.go:117] "RemoveContainer" containerID="2b1d6086fc523bcd996138c556cfbd768e0bd50d2055029006498d8fe6b511e9" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.422548 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xcg8z" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.429590 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3025f8d1-e415-4db4-815a-97b6ef8d09dc","Type":"ContainerStarted","Data":"e07c08592bbb0432b03b0b7f27c1898d439e89048803205629227c002e7aea0e"} Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.443204 4735 scope.go:117] "RemoveContainer" containerID="294c687ce61d4086b42c2f6a8f20662cdd9ce785b26621e7302a517daa404e5d" Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.455008 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:18:22 crc kubenswrapper[4735]: I0131 15:18:22.467773 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xcg8z"] Jan 31 15:18:23 crc kubenswrapper[4735]: I0131 15:18:23.552821 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b397911-cf10-488e-af94-e0115c62b95b" path="/var/lib/kubelet/pods/2b397911-cf10-488e-af94-e0115c62b95b/volumes" Jan 31 15:18:24 crc kubenswrapper[4735]: I0131 15:18:24.456827 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3025f8d1-e415-4db4-815a-97b6ef8d09dc","Type":"ContainerStarted","Data":"1a458c732ad5a0366510912466c121ad390765edb83d6549893c65c2facab5ff"} Jan 31 15:18:24 crc kubenswrapper[4735]: I0131 15:18:24.457499 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 15:18:24 crc kubenswrapper[4735]: I0131 15:18:24.504324 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.037115698 podStartE2EDuration="6.504295255s" podCreationTimestamp="2026-01-31 15:18:18 +0000 UTC" firstStartedPulling="2026-01-31 15:18:19.190818323 +0000 UTC m=+1184.964147365" lastFinishedPulling="2026-01-31 15:18:23.65799787 +0000 UTC m=+1189.431326922" observedRunningTime="2026-01-31 15:18:24.492037329 +0000 UTC m=+1190.265366391" watchObservedRunningTime="2026-01-31 15:18:24.504295255 +0000 UTC m=+1190.277624337" Jan 31 15:18:25 crc kubenswrapper[4735]: I0131 15:18:25.509926 4735 generic.go:334] "Generic (PLEG): container finished" podID="014134de-eb91-414a-a8a8-6ffe3cae5e72" containerID="961e2e391877e8b195fc8a0c8d463824ce20500e9ec683ddf6d7b7f308c17a87" exitCode=0 Jan 31 15:18:25 crc kubenswrapper[4735]: I0131 15:18:25.510547 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lddf6" event={"ID":"014134de-eb91-414a-a8a8-6ffe3cae5e72","Type":"ContainerDied","Data":"961e2e391877e8b195fc8a0c8d463824ce20500e9ec683ddf6d7b7f308c17a87"} Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.893676 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.953030 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data\") pod \"014134de-eb91-414a-a8a8-6ffe3cae5e72\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.953094 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwgc2\" (UniqueName: \"kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2\") pod \"014134de-eb91-414a-a8a8-6ffe3cae5e72\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.953140 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle\") pod \"014134de-eb91-414a-a8a8-6ffe3cae5e72\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.953284 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts\") pod \"014134de-eb91-414a-a8a8-6ffe3cae5e72\" (UID: \"014134de-eb91-414a-a8a8-6ffe3cae5e72\") " Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.966866 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts" (OuterVolumeSpecName: "scripts") pod "014134de-eb91-414a-a8a8-6ffe3cae5e72" (UID: "014134de-eb91-414a-a8a8-6ffe3cae5e72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.967020 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2" (OuterVolumeSpecName: "kube-api-access-dwgc2") pod "014134de-eb91-414a-a8a8-6ffe3cae5e72" (UID: "014134de-eb91-414a-a8a8-6ffe3cae5e72"). InnerVolumeSpecName "kube-api-access-dwgc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.986987 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "014134de-eb91-414a-a8a8-6ffe3cae5e72" (UID: "014134de-eb91-414a-a8a8-6ffe3cae5e72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:26 crc kubenswrapper[4735]: I0131 15:18:26.992510 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data" (OuterVolumeSpecName: "config-data") pod "014134de-eb91-414a-a8a8-6ffe3cae5e72" (UID: "014134de-eb91-414a-a8a8-6ffe3cae5e72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.055039 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwgc2\" (UniqueName: \"kubernetes.io/projected/014134de-eb91-414a-a8a8-6ffe3cae5e72-kube-api-access-dwgc2\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.055071 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.055082 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.055091 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014134de-eb91-414a-a8a8-6ffe3cae5e72-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.549327 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lddf6" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.551298 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lddf6" event={"ID":"014134de-eb91-414a-a8a8-6ffe3cae5e72","Type":"ContainerDied","Data":"2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1"} Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.551407 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a54e3e5a345368303340419e377accb9587c98cdf85c5b5c19ba19cf44a0ca1" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.741921 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.741988 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.755013 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.768005 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.768259 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2deb7f44-da52-49a0-a43f-36341a898af4" containerName="nova-scheduler-scheduler" containerID="cri-o://dcac7202e44b135ec2115ad9bbc507ab1c47bf89ea8d6dfc412a7b825fbbf87a" gracePeriod=30 Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.806565 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.806897 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" containerID="cri-o://a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902" gracePeriod=30 Jan 31 15:18:27 crc kubenswrapper[4735]: I0131 15:18:27.807459 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" 
containerName="nova-metadata-metadata" containerID="cri-o://5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265" gracePeriod=30 Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.559465 4735 generic.go:334] "Generic (PLEG): container finished" podID="1272c508-e793-42ce-b985-74c083b4b70c" containerID="a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902" exitCode=143 Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.559538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerDied","Data":"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902"} Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.560776 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-log" containerID="cri-o://b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869" gracePeriod=30 Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.561234 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-api" containerID="cri-o://7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e" gracePeriod=30 Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.566313 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": EOF" Jan 31 15:18:28 crc kubenswrapper[4735]: I0131 15:18:28.566320 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": EOF" Jan 31 15:18:29 crc kubenswrapper[4735]: I0131 15:18:29.573873 4735 generic.go:334] "Generic (PLEG): container finished" podID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerID="b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869" exitCode=143 Jan 31 15:18:29 crc kubenswrapper[4735]: I0131 15:18:29.573913 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerDied","Data":"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869"} Jan 31 15:18:30 crc kubenswrapper[4735]: I0131 15:18:30.962007 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:46714->10.217.0.195:8775: read: connection reset by peer" Jan 31 15:18:30 crc kubenswrapper[4735]: I0131 15:18:30.963406 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:46730->10.217.0.195:8775: read: connection reset by peer" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.477080 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.547317 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs\") pod \"1272c508-e793-42ce-b985-74c083b4b70c\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.547478 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle\") pod \"1272c508-e793-42ce-b985-74c083b4b70c\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.547779 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data\") pod \"1272c508-e793-42ce-b985-74c083b4b70c\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.547832 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs\") pod \"1272c508-e793-42ce-b985-74c083b4b70c\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.547906 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxdpg\" (UniqueName: \"kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg\") pod \"1272c508-e793-42ce-b985-74c083b4b70c\" (UID: \"1272c508-e793-42ce-b985-74c083b4b70c\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.551859 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs" (OuterVolumeSpecName: "logs") pod "1272c508-e793-42ce-b985-74c083b4b70c" (UID: "1272c508-e793-42ce-b985-74c083b4b70c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.583638 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg" (OuterVolumeSpecName: "kube-api-access-bxdpg") pod "1272c508-e793-42ce-b985-74c083b4b70c" (UID: "1272c508-e793-42ce-b985-74c083b4b70c"). InnerVolumeSpecName "kube-api-access-bxdpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.606143 4735 generic.go:334] "Generic (PLEG): container finished" podID="2deb7f44-da52-49a0-a43f-36341a898af4" containerID="dcac7202e44b135ec2115ad9bbc507ab1c47bf89ea8d6dfc412a7b825fbbf87a" exitCode=0 Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.606237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2deb7f44-da52-49a0-a43f-36341a898af4","Type":"ContainerDied","Data":"dcac7202e44b135ec2115ad9bbc507ab1c47bf89ea8d6dfc412a7b825fbbf87a"} Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.614699 4735 generic.go:334] "Generic (PLEG): container finished" podID="1272c508-e793-42ce-b985-74c083b4b70c" containerID="5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265" exitCode=0 Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.614745 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerDied","Data":"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265"} Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.614761 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.614774 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1272c508-e793-42ce-b985-74c083b4b70c","Type":"ContainerDied","Data":"ce6cd982a49e9bcfcf098eae1ebec00ef07d49d58970dd5450b5a62a46d2dfc9"} Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.614794 4735 scope.go:117] "RemoveContainer" containerID="5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.626675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1272c508-e793-42ce-b985-74c083b4b70c" (UID: "1272c508-e793-42ce-b985-74c083b4b70c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.626736 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data" (OuterVolumeSpecName: "config-data") pod "1272c508-e793-42ce-b985-74c083b4b70c" (UID: "1272c508-e793-42ce-b985-74c083b4b70c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.644278 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1272c508-e793-42ce-b985-74c083b4b70c" (UID: "1272c508-e793-42ce-b985-74c083b4b70c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.646497 4735 scope.go:117] "RemoveContainer" containerID="a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.651129 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.651156 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.651167 4735 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1272c508-e793-42ce-b985-74c083b4b70c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.651180 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxdpg\" (UniqueName: \"kubernetes.io/projected/1272c508-e793-42ce-b985-74c083b4b70c-kube-api-access-bxdpg\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.651189 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1272c508-e793-42ce-b985-74c083b4b70c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.668691 4735 scope.go:117] "RemoveContainer" containerID="5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.669289 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265\": container with ID starting with 5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265 not found: ID does not exist" containerID="5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.669319 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265"} err="failed to get container status \"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265\": rpc error: code = NotFound desc = could not find container \"5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265\": container with ID starting with 5e5cbae8df7548068d40184bd8207508c589ee82e3fa15e2c91d576ed01b5265 not found: ID does not exist" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.669345 4735 scope.go:117] "RemoveContainer" containerID="a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.669788 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902\": container with ID starting with a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902 not found: ID does not exist" containerID="a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.669819 4735 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902"} err="failed to get container status \"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902\": rpc error: code = NotFound desc = could not find container \"a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902\": container with ID starting with a2f38343c4bb72633ff1bbf7522d44b2f02a22fc262f31dc31359c6dc6b82902 not found: ID does not exist" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.815084 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.947712 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.956190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle\") pod \"2deb7f44-da52-49a0-a43f-36341a898af4\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.956265 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssqq5\" (UniqueName: \"kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5\") pod \"2deb7f44-da52-49a0-a43f-36341a898af4\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.956417 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data\") pod \"2deb7f44-da52-49a0-a43f-36341a898af4\" (UID: \"2deb7f44-da52-49a0-a43f-36341a898af4\") " Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.962622 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5" (OuterVolumeSpecName: "kube-api-access-ssqq5") pod "2deb7f44-da52-49a0-a43f-36341a898af4" (UID: "2deb7f44-da52-49a0-a43f-36341a898af4"). InnerVolumeSpecName "kube-api-access-ssqq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.963388 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.975978 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.976877 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.977048 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.977294 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014134de-eb91-414a-a8a8-6ffe3cae5e72" containerName="nova-manage" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.977448 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="014134de-eb91-414a-a8a8-6ffe3cae5e72" containerName="nova-manage" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.977586 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deb7f44-da52-49a0-a43f-36341a898af4" containerName="nova-scheduler-scheduler" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.977700 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deb7f44-da52-49a0-a43f-36341a898af4" containerName="nova-scheduler-scheduler" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.977823 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="dnsmasq-dns" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.977934 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="dnsmasq-dns" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.978064 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="init" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.978172 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="init" Jan 31 15:18:31 crc kubenswrapper[4735]: E0131 15:18:31.978924 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-metadata" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.979081 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-metadata" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.979550 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b397911-cf10-488e-af94-e0115c62b95b" containerName="dnsmasq-dns" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.979695 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-metadata" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.979835 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1272c508-e793-42ce-b985-74c083b4b70c" containerName="nova-metadata-log" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.979998 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deb7f44-da52-49a0-a43f-36341a898af4" 
containerName="nova-scheduler-scheduler" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.980213 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="014134de-eb91-414a-a8a8-6ffe3cae5e72" containerName="nova-manage" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.982553 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.983616 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.984396 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.984710 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 15:18:31 crc kubenswrapper[4735]: I0131 15:18:31.992172 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2deb7f44-da52-49a0-a43f-36341a898af4" (UID: "2deb7f44-da52-49a0-a43f-36341a898af4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.016277 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data" (OuterVolumeSpecName: "config-data") pod "2deb7f44-da52-49a0-a43f-36341a898af4" (UID: "2deb7f44-da52-49a0-a43f-36341a898af4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.058717 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-logs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.058804 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.058828 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-config-data\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.058898 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qmb\" (UniqueName: \"kubernetes.io/projected/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-kube-api-access-v7qmb\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.059037 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.059201 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.059218 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2deb7f44-da52-49a0-a43f-36341a898af4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.059229 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssqq5\" (UniqueName: \"kubernetes.io/projected/2deb7f44-da52-49a0-a43f-36341a898af4-kube-api-access-ssqq5\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.160653 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.160807 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-logs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.160832 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.160856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-config-data\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.160881 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qmb\" (UniqueName: \"kubernetes.io/projected/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-kube-api-access-v7qmb\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.161610 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-logs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.164522 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc 
kubenswrapper[4735]: I0131 15:18:32.164945 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.166295 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-config-data\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.192545 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qmb\" (UniqueName: \"kubernetes.io/projected/df24097c-68e3-4bbc-b56b-cc19e5e91ea6-kube-api-access-v7qmb\") pod \"nova-metadata-0\" (UID: \"df24097c-68e3-4bbc-b56b-cc19e5e91ea6\") " pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.308533 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.625251 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.625261 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2deb7f44-da52-49a0-a43f-36341a898af4","Type":"ContainerDied","Data":"ec7f037924ef3da5e6e34e39d46f399efce45d085825b4634ae95959014acf9f"} Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.625747 4735 scope.go:117] "RemoveContainer" containerID="dcac7202e44b135ec2115ad9bbc507ab1c47bf89ea8d6dfc412a7b825fbbf87a" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.664612 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.681674 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.694775 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.696056 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.697545 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.715955 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.767643 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.778333 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml94g\" (UniqueName: \"kubernetes.io/projected/622b97c5-deed-459f-a9a1-c407b424f921-kube-api-access-ml94g\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.778390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-config-data\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.778467 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.879617 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.879738 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml94g\" (UniqueName: \"kubernetes.io/projected/622b97c5-deed-459f-a9a1-c407b424f921-kube-api-access-ml94g\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.879769 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-config-data\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.886105 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-config-data\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: I0131 15:18:32.886860 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/622b97c5-deed-459f-a9a1-c407b424f921-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:32 crc kubenswrapper[4735]: 
I0131 15:18:32.895470 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml94g\" (UniqueName: \"kubernetes.io/projected/622b97c5-deed-459f-a9a1-c407b424f921-kube-api-access-ml94g\") pod \"nova-scheduler-0\" (UID: \"622b97c5-deed-459f-a9a1-c407b424f921\") " pod="openstack/nova-scheduler-0" Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.015274 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.481955 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 15:18:33 crc kubenswrapper[4735]: W0131 15:18:33.488670 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622b97c5_deed_459f_a9a1_c407b424f921.slice/crio-71b1978271c5891784ef460be9407ff6e5abea401c63a3745d8134e2a3185556 WatchSource:0}: Error finding container 71b1978271c5891784ef460be9407ff6e5abea401c63a3745d8134e2a3185556: Status 404 returned error can't find the container with id 71b1978271c5891784ef460be9407ff6e5abea401c63a3745d8134e2a3185556 Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.569213 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1272c508-e793-42ce-b985-74c083b4b70c" path="/var/lib/kubelet/pods/1272c508-e793-42ce-b985-74c083b4b70c/volumes" Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.571115 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2deb7f44-da52-49a0-a43f-36341a898af4" path="/var/lib/kubelet/pods/2deb7f44-da52-49a0-a43f-36341a898af4/volumes" Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.640144 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"622b97c5-deed-459f-a9a1-c407b424f921","Type":"ContainerStarted","Data":"71b1978271c5891784ef460be9407ff6e5abea401c63a3745d8134e2a3185556"} Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.645285 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df24097c-68e3-4bbc-b56b-cc19e5e91ea6","Type":"ContainerStarted","Data":"188902ba9be471d97545f863203086e0e72a558b1ec8b3b60a893a082a8aea76"} Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.645373 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df24097c-68e3-4bbc-b56b-cc19e5e91ea6","Type":"ContainerStarted","Data":"cc5648b967ce0f4b3744c76851c786247f82c4c991088c6a81afec4c68442374"} Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.645394 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"df24097c-68e3-4bbc-b56b-cc19e5e91ea6","Type":"ContainerStarted","Data":"838f53487f721d049a59cde19576646ac7761b4c50b69be15ba9849499d7845c"} Jan 31 15:18:33 crc kubenswrapper[4735]: I0131 15:18:33.674618 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.674597782 podStartE2EDuration="2.674597782s" podCreationTimestamp="2026-01-31 15:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:33.662821859 +0000 UTC m=+1199.436150921" watchObservedRunningTime="2026-01-31 15:18:33.674597782 +0000 UTC m=+1199.447926834" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.483619 4735 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.527576 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.529613 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.529651 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpsfs\" (UniqueName: \"kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.529699 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.529741 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.529771 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle\") pod \"172ee382-e287-4a0b-bb53-0b7da0e66e77\" (UID: \"172ee382-e287-4a0b-bb53-0b7da0e66e77\") " Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.534061 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs" (OuterVolumeSpecName: "logs") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.543660 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs" (OuterVolumeSpecName: "kube-api-access-xpsfs") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "kube-api-access-xpsfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.561410 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.564710 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data" (OuterVolumeSpecName: "config-data") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.588703 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.597002 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "172ee382-e287-4a0b-bb53-0b7da0e66e77" (UID: "172ee382-e287-4a0b-bb53-0b7da0e66e77"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633054 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/172ee382-e287-4a0b-bb53-0b7da0e66e77-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633101 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633113 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633122 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633131 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpsfs\" (UniqueName: \"kubernetes.io/projected/172ee382-e287-4a0b-bb53-0b7da0e66e77-kube-api-access-xpsfs\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.633140 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172ee382-e287-4a0b-bb53-0b7da0e66e77-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.655402 4735 generic.go:334] "Generic (PLEG): container finished" podID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerID="7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e" exitCode=0 Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.655482 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.655489 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerDied","Data":"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e"} Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.655630 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"172ee382-e287-4a0b-bb53-0b7da0e66e77","Type":"ContainerDied","Data":"37671c7384574015a9396acd05e987325a10416e0e3ded5eca6ecce6d93629c7"} Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.655649 4735 scope.go:117] "RemoveContainer" containerID="7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.657161 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"622b97c5-deed-459f-a9a1-c407b424f921","Type":"ContainerStarted","Data":"471a62babfae851245d21570bd26bad52e740eb74233d44dd5034411e593dc9c"} Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.687641 4735 scope.go:117] "RemoveContainer" containerID="b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.692923 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.692908963 podStartE2EDuration="2.692908963s" podCreationTimestamp="2026-01-31 15:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:34.690541196 +0000 UTC m=+1200.463870248" watchObservedRunningTime="2026-01-31 15:18:34.692908963 +0000 UTC m=+1200.466238005" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.710724 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.720670 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.743554 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.745552 4735 scope.go:117] "RemoveContainer" containerID="7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e" Jan 31 15:18:34 crc kubenswrapper[4735]: E0131 15:18:34.745923 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e\": container with ID starting with 7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e not found: ID does not exist" containerID="7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e" Jan 31 15:18:34 crc kubenswrapper[4735]: E0131 15:18:34.745958 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-log" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.745991 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-log" Jan 31 15:18:34 crc kubenswrapper[4735]: E0131 15:18:34.746014 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-api" Jan 31 15:18:34 crc 
kubenswrapper[4735]: I0131 15:18:34.746033 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-api" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.745957 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e"} err="failed to get container status \"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e\": rpc error: code = NotFound desc = could not find container \"7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e\": container with ID starting with 7673e0a35fbc4226606bdf6ef4efab91f5b5aae76f62e6849e9b8a839eb6a46e not found: ID does not exist" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.746067 4735 scope.go:117] "RemoveContainer" containerID="b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.746225 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-log" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.746244 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" containerName="nova-api-api" Jan 31 15:18:34 crc kubenswrapper[4735]: E0131 15:18:34.746390 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869\": container with ID starting with b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869 not found: ID does not exist" containerID="b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.746408 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869"} err="failed to get container status \"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869\": rpc error: code = NotFound desc = could not find container \"b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869\": container with ID starting with b3bc2abc613b839ee5519d22a2606f8bf9ea161a7a8643c242cbc80d3347c869 not found: ID does not exist" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.747166 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.753063 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.753084 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.753736 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.769035 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837295 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837374 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-config-data\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837447 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837469 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb482d0-dc39-4b62-81e7-c680dc211c0b-logs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.837492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvcc\" (UniqueName: \"kubernetes.io/projected/7fb482d0-dc39-4b62-81e7-c680dc211c0b-kube-api-access-dqvcc\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.939636 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.940565 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.940705 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-config-data\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.940787 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.940898 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb482d0-dc39-4b62-81e7-c680dc211c0b-logs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.940998 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvcc\" (UniqueName: \"kubernetes.io/projected/7fb482d0-dc39-4b62-81e7-c680dc211c0b-kube-api-access-dqvcc\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.941364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fb482d0-dc39-4b62-81e7-c680dc211c0b-logs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.945181 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.946580 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-public-tls-certs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.946937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-config-data\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.947884 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fb482d0-dc39-4b62-81e7-c680dc211c0b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " pod="openstack/nova-api-0" Jan 31 15:18:34 crc kubenswrapper[4735]: I0131 15:18:34.969176 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvcc\" (UniqueName: \"kubernetes.io/projected/7fb482d0-dc39-4b62-81e7-c680dc211c0b-kube-api-access-dqvcc\") pod \"nova-api-0\" (UID: \"7fb482d0-dc39-4b62-81e7-c680dc211c0b\") " 
pod="openstack/nova-api-0" Jan 31 15:18:35 crc kubenswrapper[4735]: I0131 15:18:35.077109 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 15:18:35 crc kubenswrapper[4735]: I0131 15:18:35.557976 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172ee382-e287-4a0b-bb53-0b7da0e66e77" path="/var/lib/kubelet/pods/172ee382-e287-4a0b-bb53-0b7da0e66e77/volumes" Jan 31 15:18:35 crc kubenswrapper[4735]: I0131 15:18:35.603915 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 15:18:35 crc kubenswrapper[4735]: I0131 15:18:35.672984 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb482d0-dc39-4b62-81e7-c680dc211c0b","Type":"ContainerStarted","Data":"4a7aedc0be1361ae23ec756142b706fde1d9705359973fb8839ff3c4a5911e1d"} Jan 31 15:18:36 crc kubenswrapper[4735]: I0131 15:18:36.686182 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb482d0-dc39-4b62-81e7-c680dc211c0b","Type":"ContainerStarted","Data":"329dd9ece771a4ce641e1aa72045885cb3da5358521a593191a05194e4edcc70"} Jan 31 15:18:36 crc kubenswrapper[4735]: I0131 15:18:36.686556 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7fb482d0-dc39-4b62-81e7-c680dc211c0b","Type":"ContainerStarted","Data":"02a6aba135e52fff30a9315e915d18377b92e0c90bd290aa951ec0bed773c7cd"} Jan 31 15:18:36 crc kubenswrapper[4735]: I0131 15:18:36.731705 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.731684956 podStartE2EDuration="2.731684956s" podCreationTimestamp="2026-01-31 15:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:18:36.720387107 +0000 UTC m=+1202.493716199" watchObservedRunningTime="2026-01-31 15:18:36.731684956 +0000 UTC m=+1202.505013998" Jan 31 15:18:37 crc kubenswrapper[4735]: I0131 15:18:37.309100 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:18:37 crc kubenswrapper[4735]: I0131 15:18:37.309662 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 15:18:38 crc kubenswrapper[4735]: I0131 15:18:38.016342 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 15:18:42 crc kubenswrapper[4735]: I0131 15:18:42.309406 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 15:18:42 crc kubenswrapper[4735]: I0131 15:18:42.310642 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 15:18:43 crc kubenswrapper[4735]: I0131 15:18:43.016388 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 15:18:43 crc kubenswrapper[4735]: I0131 15:18:43.070185 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 15:18:43 crc kubenswrapper[4735]: I0131 15:18:43.336766 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df24097c-68e3-4bbc-b56b-cc19e5e91ea6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 31 15:18:43 crc kubenswrapper[4735]: I0131 15:18:43.336770 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="df24097c-68e3-4bbc-b56b-cc19e5e91ea6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 15:18:43 crc kubenswrapper[4735]: I0131 15:18:43.801555 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 15:18:45 crc kubenswrapper[4735]: I0131 15:18:45.078025 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:18:45 crc kubenswrapper[4735]: I0131 15:18:45.078098 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 15:18:46 crc kubenswrapper[4735]: I0131 15:18:46.097624 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fb482d0-dc39-4b62-81e7-c680dc211c0b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 15:18:46 crc kubenswrapper[4735]: I0131 15:18:46.097655 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7fb482d0-dc39-4b62-81e7-c680dc211c0b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 15:18:48 crc kubenswrapper[4735]: I0131 15:18:48.742818 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 15:18:52 crc kubenswrapper[4735]: I0131 15:18:52.320089 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 15:18:52 crc kubenswrapper[4735]: I0131 15:18:52.321663 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 15:18:52 crc kubenswrapper[4735]: I0131 15:18:52.331033 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 15:18:52 crc kubenswrapper[4735]: I0131 15:18:52.862811 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.085404 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.085929 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.089653 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.100178 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.893418 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 15:18:55 crc kubenswrapper[4735]: I0131 15:18:55.899398 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 15:19:04 crc kubenswrapper[4735]: I0131 15:19:04.557598 4735 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:05 crc kubenswrapper[4735]: I0131 15:19:05.511022 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:09 crc kubenswrapper[4735]: I0131 15:19:09.043383 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="rabbitmq" containerID="cri-o://5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9" gracePeriod=604796 Jan 31 15:19:09 crc kubenswrapper[4735]: I0131 15:19:09.341528 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="rabbitmq" containerID="cri-o://a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45" gracePeriod=604797 Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.708394 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.710761 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.716662 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.732889 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.734325 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818560 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818597 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818642 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818660 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818690 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc 
kubenswrapper[4735]: I0131 15:19:15.818739 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818770 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdkkh\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818806 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818877 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.818927 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf\") pod \"2f06bd71-0d33-43d8-9a0c-586aca801173\" (UID: \"2f06bd71-0d33-43d8-9a0c-586aca801173\") " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819163 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819218 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snr9n\" (UniqueName: \"kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819268 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819284 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819329 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819384 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819407 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.819910 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.820329 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.820787 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.824549 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.826539 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh" (OuterVolumeSpecName: "kube-api-access-mdkkh") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "kube-api-access-mdkkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.830401 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.836611 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info" (OuterVolumeSpecName: "pod-info") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.838747 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.850017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data" (OuterVolumeSpecName: "config-data") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.903912 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf" (OuterVolumeSpecName: "server-conf") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921371 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921461 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snr9n\" (UniqueName: \"kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921515 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921530 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921573 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921616 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921636 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921678 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdkkh\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-kube-api-access-mdkkh\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921689 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921710 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921719 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921727 4735 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921735 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921743 4735 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2f06bd71-0d33-43d8-9a0c-586aca801173-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921752 4735 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2f06bd71-0d33-43d8-9a0c-586aca801173-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921760 4735 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.921767 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f06bd71-0d33-43d8-9a0c-586aca801173-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.922252 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.923113 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.923352 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.923512 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.923678 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.923896 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.930616 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2f06bd71-0d33-43d8-9a0c-586aca801173" (UID: "2f06bd71-0d33-43d8-9a0c-586aca801173"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.942136 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snr9n\" (UniqueName: \"kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n\") pod \"dnsmasq-dns-79bd4cc8c9-dnrfg\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.960107 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:15 crc kubenswrapper[4735]: I0131 15:19:15.960182 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.024893 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.024955 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.024978 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025015 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025051 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret\") 
pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025083 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025119 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025164 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025279 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025312 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px668\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668\") pod \"2aad2308-9cbb-48a2-99cc-7556caf884a5\" (UID: \"2aad2308-9cbb-48a2-99cc-7556caf884a5\") " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025704 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2f06bd71-0d33-43d8-9a0c-586aca801173-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025715 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.025914 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.026014 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.026263 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.031017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.032696 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668" (OuterVolumeSpecName: "kube-api-access-px668") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "kube-api-access-px668". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.034767 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.037325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info" (OuterVolumeSpecName: "pod-info") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.038697 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.043622 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.065822 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data" (OuterVolumeSpecName: "config-data") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.100871 4735 generic.go:334] "Generic (PLEG): container finished" podID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerID="a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45" exitCode=0 Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.100957 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerDied","Data":"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45"} Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.101012 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2aad2308-9cbb-48a2-99cc-7556caf884a5","Type":"ContainerDied","Data":"04a9a77dedbb83250b4ac68dcb5b315145fd54ee59f6df6052839fa46abadeaa"} Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.101029 4735 scope.go:117] "RemoveContainer" containerID="a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.101199 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.102915 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf" (OuterVolumeSpecName: "server-conf") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.106608 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerID="5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9" exitCode=0 Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.106636 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerDied","Data":"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9"} Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.106675 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2f06bd71-0d33-43d8-9a0c-586aca801173","Type":"ContainerDied","Data":"e84e4feff54b1ce4dbb3a5053e76ca683dc5808bcf2caa8fc8c0ea57f2f500b3"} Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.106741 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127926 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127955 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127964 4735 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2aad2308-9cbb-48a2-99cc-7556caf884a5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127973 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px668\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-kube-api-access-px668\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127981 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.127989 4735 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.128012 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.128022 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.128032 4735 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2aad2308-9cbb-48a2-99cc-7556caf884a5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.128040 4735 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2aad2308-9cbb-48a2-99cc-7556caf884a5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.147438 4735 scope.go:117] "RemoveContainer" containerID="2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.157978 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.178505 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2aad2308-9cbb-48a2-99cc-7556caf884a5" (UID: "2aad2308-9cbb-48a2-99cc-7556caf884a5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.178562 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190336 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.190738 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="setup-container" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190751 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="setup-container" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.190767 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="setup-container" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190773 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="setup-container" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.190789 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190794 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.190816 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190821 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.190991 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.191003 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" containerName="rabbitmq" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.192032 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.192813 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.193454 4735 scope.go:117] "RemoveContainer" containerID="a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.193927 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45\": container with ID starting with a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45 not found: ID does not exist" containerID="a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.193951 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45"} err="failed to get container status \"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45\": rpc error: code = NotFound desc = could not find container \"a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45\": container with ID starting with a4b48f6471b6eba20d5217b526578cda1a2c44ee9eb80f6080fe54190f929b45 not found: ID does not exist" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.193970 4735 scope.go:117] "RemoveContainer" containerID="2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.194533 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6\": container with ID starting with 2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6 not found: ID does not exist" containerID="2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195069 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6"} err="failed to get container status \"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6\": rpc error: code = NotFound desc = could not find container \"2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6\": container with ID starting with 2ff73f7189ae6e185ced0a4a84af526e1bbe3c8ed59e29f0628de5cb677785e6 not found: ID does not exist" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195106 4735 scope.go:117] "RemoveContainer" containerID="5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195168 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195437 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195638 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-75jv5" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195642 4735 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.195893 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.196024 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.204809 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.217574 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.232507 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2aad2308-9cbb-48a2-99cc-7556caf884a5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.232549 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.256235 4735 scope.go:117] "RemoveContainer" containerID="70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.279813 4735 scope.go:117] "RemoveContainer" containerID="5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.280354 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9\": container with ID starting with 5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9 not found: ID does not exist" containerID="5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.280395 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9"} err="failed to get container status \"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9\": rpc error: code = NotFound desc = could not find container \"5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9\": container with ID starting with 5ff3640913481b58f21250033b4b1035c41f4c2529427cfc5055a0ca99e231f9 not found: ID does not exist" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.280450 4735 scope.go:117] "RemoveContainer" containerID="70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0" Jan 31 15:19:16 crc kubenswrapper[4735]: E0131 15:19:16.280880 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0\": container with ID starting with 70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0 not found: ID does not exist" containerID="70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.280912 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0"} err="failed to get 
container status \"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0\": rpc error: code = NotFound desc = could not find container \"70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0\": container with ID starting with 70542676623fd56388b4bcc62bb3248047601c0c37a4f860c5a2e283fdebdeb0 not found: ID does not exist" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334319 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334347 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334378 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334410 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-config-data\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334437 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334453 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334493 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.334529 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmqh\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-kube-api-access-qrmqh\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436058 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-config-data\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436342 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436478 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436582 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436683 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436754 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmqh\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-kube-api-access-qrmqh\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436879 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " 
pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.436973 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.437071 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.437153 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.437242 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.437581 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.437608 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.438108 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.439284 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-config-data\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.440386 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.441142 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.441221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.442098 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.445093 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.446938 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.482068 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.530176 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.538946 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.541958 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.542151 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmqh\" (UniqueName: \"kubernetes.io/projected/9569e461-f5f7-4a24-a8d9-7f67e8f46b04-kube-api-access-qrmqh\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.544754 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.545848 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.546063 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.547150 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wm87z" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.547268 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.547322 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.559358 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.567566 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.568377 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"9569e461-f5f7-4a24-a8d9-7f67e8f46b04\") " pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.579510 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653590 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653656 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653675 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsz8\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-kube-api-access-kqsz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: 
I0131 15:19:16.653700 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43595acf-df41-4c13-8d02-35d62877fecc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653727 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43595acf-df41-4c13-8d02-35d62877fecc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653758 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653789 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653807 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653854 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653872 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.653892 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755321 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 
15:19:16.755362 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755385 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755484 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755560 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqsz8\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-kube-api-access-kqsz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755580 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43595acf-df41-4c13-8d02-35d62877fecc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43595acf-df41-4c13-8d02-35d62877fecc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755632 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755651 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755667 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.755890 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.756490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.756743 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.759334 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.759639 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.759882 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/43595acf-df41-4c13-8d02-35d62877fecc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.759975 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/43595acf-df41-4c13-8d02-35d62877fecc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.761097 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.764009 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.764030 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/43595acf-df41-4c13-8d02-35d62877fecc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.775846 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqsz8\" (UniqueName: \"kubernetes.io/projected/43595acf-df41-4c13-8d02-35d62877fecc-kube-api-access-kqsz8\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.798517 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"43595acf-df41-4c13-8d02-35d62877fecc\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.828233 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 15:19:16 crc kubenswrapper[4735]: I0131 15:19:16.886961 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.122610 4735 generic.go:334] "Generic (PLEG): container finished" podID="11551de9-956d-48d6-a735-a5c796fd4957" containerID="8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332" exitCode=0 Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.122828 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" event={"ID":"11551de9-956d-48d6-a735-a5c796fd4957","Type":"ContainerDied","Data":"8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332"} Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.122927 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" event={"ID":"11551de9-956d-48d6-a735-a5c796fd4957","Type":"ContainerStarted","Data":"262ab99b75a1836ef1f2645725528dcd1fdf813e1dd5719e51bde561ae8c4d00"} Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.297616 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.414529 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.556069 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aad2308-9cbb-48a2-99cc-7556caf884a5" path="/var/lib/kubelet/pods/2aad2308-9cbb-48a2-99cc-7556caf884a5/volumes" Jan 31 15:19:17 crc kubenswrapper[4735]: I0131 15:19:17.557928 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f06bd71-0d33-43d8-9a0c-586aca801173" path="/var/lib/kubelet/pods/2f06bd71-0d33-43d8-9a0c-586aca801173/volumes" Jan 31 15:19:18 crc kubenswrapper[4735]: I0131 15:19:18.142437 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" event={"ID":"11551de9-956d-48d6-a735-a5c796fd4957","Type":"ContainerStarted","Data":"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e"} Jan 31 15:19:18 crc kubenswrapper[4735]: I0131 
15:19:18.142947 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:18 crc kubenswrapper[4735]: I0131 15:19:18.146273 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9569e461-f5f7-4a24-a8d9-7f67e8f46b04","Type":"ContainerStarted","Data":"9344a588ed1d8bcd29f05d91b15fb5cc51bdbeadd3af0963ad22cd79f35344f0"} Jan 31 15:19:18 crc kubenswrapper[4735]: I0131 15:19:18.148474 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43595acf-df41-4c13-8d02-35d62877fecc","Type":"ContainerStarted","Data":"c0527752f8f331c1b3531c1773d50d7bed95f70c7e9fb9b0566b2732a487e890"} Jan 31 15:19:18 crc kubenswrapper[4735]: I0131 15:19:18.181915 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" podStartSLOduration=3.181887747 podStartE2EDuration="3.181887747s" podCreationTimestamp="2026-01-31 15:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:19:18.174059646 +0000 UTC m=+1243.947388758" watchObservedRunningTime="2026-01-31 15:19:18.181887747 +0000 UTC m=+1243.955216829" Jan 31 15:19:20 crc kubenswrapper[4735]: I0131 15:19:20.171869 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9569e461-f5f7-4a24-a8d9-7f67e8f46b04","Type":"ContainerStarted","Data":"d4340f4124610c2902b8b4dccfa176cd983ad3a002a54b64d26a443e1f73900a"} Jan 31 15:19:20 crc kubenswrapper[4735]: I0131 15:19:20.175366 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43595acf-df41-4c13-8d02-35d62877fecc","Type":"ContainerStarted","Data":"e67502e1e23a4124ac8ca56baa2d58642a306e8dede82ec5e8d66b9ecaa75cef"} Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.045688 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.157748 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.158231 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="dnsmasq-dns" containerID="cri-o://120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b" gracePeriod=10 Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.329904 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-gkqk4"] Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.331739 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.342822 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-gkqk4"] Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.421856 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422177 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lkr\" (UniqueName: \"kubernetes.io/projected/d4578674-5cf7-4382-811e-fe1cef58fff2-kube-api-access-c7lkr\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422340 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422443 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-config\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422461 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422567 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.422669 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-svc\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.524939 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 
15:19:26.524994 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-config\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.525011 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.525044 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.525077 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-svc\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.525112 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.525160 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lkr\" (UniqueName: \"kubernetes.io/projected/d4578674-5cf7-4382-811e-fe1cef58fff2-kube-api-access-c7lkr\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.526354 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.527116 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-config\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.527632 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.528370 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.529287 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-dns-svc\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.530699 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4578674-5cf7-4382-811e-fe1cef58fff2-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.547314 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lkr\" (UniqueName: \"kubernetes.io/projected/d4578674-5cf7-4382-811e-fe1cef58fff2-kube-api-access-c7lkr\") pod \"dnsmasq-dns-55478c4467-gkqk4\" (UID: \"d4578674-5cf7-4382-811e-fe1cef58fff2\") " pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.634302 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.648604 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727269 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m75d6\" (UniqueName: \"kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727502 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727528 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727588 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727620 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 
31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.727687 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc\") pod \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\" (UID: \"7e03a033-a1e4-4008-93d8-02ade8bd23dd\") " Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.732539 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6" (OuterVolumeSpecName: "kube-api-access-m75d6") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "kube-api-access-m75d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.771612 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config" (OuterVolumeSpecName: "config") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.779962 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.787741 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.788815 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.790551 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7e03a033-a1e4-4008-93d8-02ade8bd23dd" (UID: "7e03a033-a1e4-4008-93d8-02ade8bd23dd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829378 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m75d6\" (UniqueName: \"kubernetes.io/projected/7e03a033-a1e4-4008-93d8-02ade8bd23dd-kube-api-access-m75d6\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829453 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829467 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829480 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829492 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:26 crc kubenswrapper[4735]: I0131 15:19:26.829503 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e03a033-a1e4-4008-93d8-02ade8bd23dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.195586 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-gkqk4"] Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.247661 4735 generic.go:334] "Generic (PLEG): container finished" podID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerID="120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b" exitCode=0 Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.248080 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" event={"ID":"7e03a033-a1e4-4008-93d8-02ade8bd23dd","Type":"ContainerDied","Data":"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b"} Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.248163 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.248192 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tft9k" event={"ID":"7e03a033-a1e4-4008-93d8-02ade8bd23dd","Type":"ContainerDied","Data":"93b8451875a17499c1d50e22463ec01db82e9a73b7d2f1447c1a5e9193ea4807"} Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.248213 4735 scope.go:117] "RemoveContainer" containerID="120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.250377 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" event={"ID":"d4578674-5cf7-4382-811e-fe1cef58fff2","Type":"ContainerStarted","Data":"1fc79c5350b4633997e9b3b1355055ef02d2985051dafe4e72ae624402d227b6"} Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.295311 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.302483 4735 scope.go:117] "RemoveContainer" containerID="37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.305575 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tft9k"] Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.405115 4735 scope.go:117] "RemoveContainer" containerID="120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b" Jan 31 15:19:27 crc kubenswrapper[4735]: E0131 15:19:27.405607 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b\": container with ID starting with 120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b not found: ID does not exist" containerID="120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.405643 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b"} err="failed to get container status \"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b\": rpc error: code = NotFound desc = could not find container \"120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b\": container with ID starting with 120702f4cdf28da7375b4803317315c81fd2ee83b6752217ae5106ef72ae008b not found: ID does not exist" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.405667 4735 scope.go:117] "RemoveContainer" containerID="37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98" Jan 31 15:19:27 crc kubenswrapper[4735]: E0131 15:19:27.406049 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98\": container with ID starting with 37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98 not found: ID does not exist" containerID="37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.406075 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98"} err="failed to get container status 
\"37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98\": rpc error: code = NotFound desc = could not find container \"37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98\": container with ID starting with 37687271631c23c1119bfa681ec2899621927c78e6cfdab85beb532f6d947b98 not found: ID does not exist" Jan 31 15:19:27 crc kubenswrapper[4735]: I0131 15:19:27.550600 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" path="/var/lib/kubelet/pods/7e03a033-a1e4-4008-93d8-02ade8bd23dd/volumes" Jan 31 15:19:28 crc kubenswrapper[4735]: I0131 15:19:28.291223 4735 generic.go:334] "Generic (PLEG): container finished" podID="d4578674-5cf7-4382-811e-fe1cef58fff2" containerID="6de63a84b6d0c84c962ea481f2e2681b620915fdfde77f5c29c84469e5c9b84d" exitCode=0 Jan 31 15:19:28 crc kubenswrapper[4735]: I0131 15:19:28.291601 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" event={"ID":"d4578674-5cf7-4382-811e-fe1cef58fff2","Type":"ContainerDied","Data":"6de63a84b6d0c84c962ea481f2e2681b620915fdfde77f5c29c84469e5c9b84d"} Jan 31 15:19:29 crc kubenswrapper[4735]: I0131 15:19:29.307850 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" event={"ID":"d4578674-5cf7-4382-811e-fe1cef58fff2","Type":"ContainerStarted","Data":"968c20e6fa06d3cc4271744ad7e54f029566c99c684b7ab38787994373b8129d"} Jan 31 15:19:29 crc kubenswrapper[4735]: I0131 15:19:29.308447 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:29 crc kubenswrapper[4735]: I0131 15:19:29.343743 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" podStartSLOduration=3.3437200000000002 podStartE2EDuration="3.34372s" podCreationTimestamp="2026-01-31 15:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:19:29.338465962 +0000 UTC m=+1255.111795004" watchObservedRunningTime="2026-01-31 15:19:29.34372 +0000 UTC m=+1255.117049062" Jan 31 15:19:36 crc kubenswrapper[4735]: I0131 15:19:36.650825 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-gkqk4" Jan 31 15:19:36 crc kubenswrapper[4735]: I0131 15:19:36.759815 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:36 crc kubenswrapper[4735]: I0131 15:19:36.760473 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="dnsmasq-dns" containerID="cri-o://33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e" gracePeriod=10 Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.253617 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391221 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391273 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391344 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391434 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snr9n\" (UniqueName: \"kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391460 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391510 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.391530 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config\") pod \"11551de9-956d-48d6-a735-a5c796fd4957\" (UID: \"11551de9-956d-48d6-a735-a5c796fd4957\") " Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.401730 4735 generic.go:334] "Generic (PLEG): container finished" podID="11551de9-956d-48d6-a735-a5c796fd4957" containerID="33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e" exitCode=0 Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.401765 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" event={"ID":"11551de9-956d-48d6-a735-a5c796fd4957","Type":"ContainerDied","Data":"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e"} Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.401791 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" event={"ID":"11551de9-956d-48d6-a735-a5c796fd4957","Type":"ContainerDied","Data":"262ab99b75a1836ef1f2645725528dcd1fdf813e1dd5719e51bde561ae8c4d00"} Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.401807 4735 scope.go:117] "RemoveContainer" 
containerID="33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.401923 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-dnrfg" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.410900 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n" (OuterVolumeSpecName: "kube-api-access-snr9n") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "kube-api-access-snr9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.444218 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.447727 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.455776 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.462135 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.470168 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config" (OuterVolumeSpecName: "config") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.472384 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11551de9-956d-48d6-a735-a5c796fd4957" (UID: "11551de9-956d-48d6-a735-a5c796fd4957"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493525 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493552 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493565 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493578 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493590 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493601 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snr9n\" (UniqueName: \"kubernetes.io/projected/11551de9-956d-48d6-a735-a5c796fd4957-kube-api-access-snr9n\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.493614 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/11551de9-956d-48d6-a735-a5c796fd4957-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.515054 4735 scope.go:117] "RemoveContainer" containerID="8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.534158 4735 scope.go:117] "RemoveContainer" containerID="33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e" Jan 31 15:19:37 crc kubenswrapper[4735]: E0131 15:19:37.534566 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e\": container with ID starting with 33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e not found: ID does not exist" containerID="33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.534604 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e"} err="failed to get container status \"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e\": rpc error: code = NotFound desc = could not find container \"33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e\": container with ID starting with 33d817d84d7145c3d5ebf9829bd8ae13de6f67187f779181f3d25e4df3270a7e not found: ID does not exist" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.534630 4735 scope.go:117] "RemoveContainer" containerID="8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332" Jan 31 15:19:37 crc 
kubenswrapper[4735]: E0131 15:19:37.534922 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332\": container with ID starting with 8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332 not found: ID does not exist" containerID="8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.534948 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332"} err="failed to get container status \"8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332\": rpc error: code = NotFound desc = could not find container \"8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332\": container with ID starting with 8b7ae7be4b0ca3b580a3d22e70fd76ddea287f9d2bd17007d6e7f2cc46b5b332 not found: ID does not exist" Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.734171 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:37 crc kubenswrapper[4735]: I0131 15:19:37.743528 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-dnrfg"] Jan 31 15:19:39 crc kubenswrapper[4735]: I0131 15:19:39.560042 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11551de9-956d-48d6-a735-a5c796fd4957" path="/var/lib/kubelet/pods/11551de9-956d-48d6-a735-a5c796fd4957/volumes" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.702556 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm"] Jan 31 15:19:49 crc kubenswrapper[4735]: E0131 15:19:49.703618 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="init" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.703637 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="init" Jan 31 15:19:49 crc kubenswrapper[4735]: E0131 15:19:49.703668 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="init" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.703676 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="init" Jan 31 15:19:49 crc kubenswrapper[4735]: E0131 15:19:49.703691 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="dnsmasq-dns" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.703698 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="dnsmasq-dns" Jan 31 15:19:49 crc kubenswrapper[4735]: E0131 15:19:49.703710 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="dnsmasq-dns" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.703718 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="dnsmasq-dns" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.703897 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="11551de9-956d-48d6-a735-a5c796fd4957" containerName="dnsmasq-dns" Jan 31 15:19:49 crc 
kubenswrapper[4735]: I0131 15:19:49.703913 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e03a033-a1e4-4008-93d8-02ade8bd23dd" containerName="dnsmasq-dns" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.704549 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.707676 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.707796 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.707679 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.708005 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.726694 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm"] Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.784554 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.784662 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.784751 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.784869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffsq\" (UniqueName: \"kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.886607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: 
\"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.886667 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.886743 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.886810 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffsq\" (UniqueName: \"kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.892304 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.892843 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.900442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:49 crc kubenswrapper[4735]: I0131 15:19:49.904448 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffsq\" (UniqueName: \"kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:50 crc kubenswrapper[4735]: I0131 15:19:50.036941 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:19:50 crc kubenswrapper[4735]: I0131 15:19:50.616566 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm"] Jan 31 15:19:50 crc kubenswrapper[4735]: W0131 15:19:50.616742 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdb9dbd_b178_43f6_985a_1b19f40820cd.slice/crio-e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3 WatchSource:0}: Error finding container e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3: Status 404 returned error can't find the container with id e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3 Jan 31 15:19:51 crc kubenswrapper[4735]: I0131 15:19:51.560450 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" event={"ID":"2bdb9dbd-b178-43f6-985a-1b19f40820cd","Type":"ContainerStarted","Data":"e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3"} Jan 31 15:19:52 crc kubenswrapper[4735]: I0131 15:19:52.574999 4735 generic.go:334] "Generic (PLEG): container finished" podID="43595acf-df41-4c13-8d02-35d62877fecc" containerID="e67502e1e23a4124ac8ca56baa2d58642a306e8dede82ec5e8d66b9ecaa75cef" exitCode=0 Jan 31 15:19:52 crc kubenswrapper[4735]: I0131 15:19:52.575055 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43595acf-df41-4c13-8d02-35d62877fecc","Type":"ContainerDied","Data":"e67502e1e23a4124ac8ca56baa2d58642a306e8dede82ec5e8d66b9ecaa75cef"} Jan 31 15:19:53 crc kubenswrapper[4735]: I0131 15:19:53.585179 4735 generic.go:334] "Generic (PLEG): container finished" podID="9569e461-f5f7-4a24-a8d9-7f67e8f46b04" containerID="d4340f4124610c2902b8b4dccfa176cd983ad3a002a54b64d26a443e1f73900a" exitCode=0 Jan 31 15:19:53 crc kubenswrapper[4735]: I0131 15:19:53.585542 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9569e461-f5f7-4a24-a8d9-7f67e8f46b04","Type":"ContainerDied","Data":"d4340f4124610c2902b8b4dccfa176cd983ad3a002a54b64d26a443e1f73900a"} Jan 31 15:19:53 crc kubenswrapper[4735]: I0131 15:19:53.593709 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"43595acf-df41-4c13-8d02-35d62877fecc","Type":"ContainerStarted","Data":"41f7d6d7ace68b6680af11b256a0615b02090f954873730167b0215cf33478cb"} Jan 31 15:19:53 crc kubenswrapper[4735]: I0131 15:19:53.594035 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:19:53 crc kubenswrapper[4735]: I0131 15:19:53.647676 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.647656615 podStartE2EDuration="37.647656615s" podCreationTimestamp="2026-01-31 15:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:19:53.634963357 +0000 UTC m=+1279.408292419" watchObservedRunningTime="2026-01-31 15:19:53.647656615 +0000 UTC m=+1279.420985647" Jan 31 15:19:59 crc kubenswrapper[4735]: I0131 15:19:59.668390 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" 
event={"ID":"2bdb9dbd-b178-43f6-985a-1b19f40820cd","Type":"ContainerStarted","Data":"19f5b61b9ac7f4723414587bffbaa1f1d11f5c8593c195be9a54f833cd892b38"} Jan 31 15:19:59 crc kubenswrapper[4735]: I0131 15:19:59.674672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9569e461-f5f7-4a24-a8d9-7f67e8f46b04","Type":"ContainerStarted","Data":"68aafe77cef72bdccce67a85951c4d077d5bdecb2186c3e2cbcf2fc83b338413"} Jan 31 15:19:59 crc kubenswrapper[4735]: I0131 15:19:59.675336 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 15:19:59 crc kubenswrapper[4735]: I0131 15:19:59.697027 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" podStartSLOduration=2.37158273 podStartE2EDuration="10.69699871s" podCreationTimestamp="2026-01-31 15:19:49 +0000 UTC" firstStartedPulling="2026-01-31 15:19:50.619500954 +0000 UTC m=+1276.392830036" lastFinishedPulling="2026-01-31 15:19:58.944916974 +0000 UTC m=+1284.718246016" observedRunningTime="2026-01-31 15:19:59.684632691 +0000 UTC m=+1285.457961753" watchObservedRunningTime="2026-01-31 15:19:59.69699871 +0000 UTC m=+1285.470327752" Jan 31 15:19:59 crc kubenswrapper[4735]: I0131 15:19:59.711783 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.711766787 podStartE2EDuration="43.711766787s" podCreationTimestamp="2026-01-31 15:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:19:59.707032423 +0000 UTC m=+1285.480361475" watchObservedRunningTime="2026-01-31 15:19:59.711766787 +0000 UTC m=+1285.485095829" Jan 31 15:20:06 crc kubenswrapper[4735]: I0131 15:20:06.891760 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 15:20:07 crc kubenswrapper[4735]: I0131 15:20:07.346390 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:20:07 crc kubenswrapper[4735]: I0131 15:20:07.346751 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:20:09 crc kubenswrapper[4735]: I0131 15:20:09.829115 4735 generic.go:334] "Generic (PLEG): container finished" podID="2bdb9dbd-b178-43f6-985a-1b19f40820cd" containerID="19f5b61b9ac7f4723414587bffbaa1f1d11f5c8593c195be9a54f833cd892b38" exitCode=0 Jan 31 15:20:09 crc kubenswrapper[4735]: I0131 15:20:09.829172 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" event={"ID":"2bdb9dbd-b178-43f6-985a-1b19f40820cd","Type":"ContainerDied","Data":"19f5b61b9ac7f4723414587bffbaa1f1d11f5c8593c195be9a54f833cd892b38"} Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.342009 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.455198 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffsq\" (UniqueName: \"kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq\") pod \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.455266 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory\") pod \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.455314 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam\") pod \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.455385 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle\") pod \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\" (UID: \"2bdb9dbd-b178-43f6-985a-1b19f40820cd\") " Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.461352 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq" (OuterVolumeSpecName: "kube-api-access-wffsq") pod "2bdb9dbd-b178-43f6-985a-1b19f40820cd" (UID: "2bdb9dbd-b178-43f6-985a-1b19f40820cd"). InnerVolumeSpecName "kube-api-access-wffsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.469644 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2bdb9dbd-b178-43f6-985a-1b19f40820cd" (UID: "2bdb9dbd-b178-43f6-985a-1b19f40820cd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.482084 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2bdb9dbd-b178-43f6-985a-1b19f40820cd" (UID: "2bdb9dbd-b178-43f6-985a-1b19f40820cd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.498885 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory" (OuterVolumeSpecName: "inventory") pod "2bdb9dbd-b178-43f6-985a-1b19f40820cd" (UID: "2bdb9dbd-b178-43f6-985a-1b19f40820cd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.557801 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.557832 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.557844 4735 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdb9dbd-b178-43f6-985a-1b19f40820cd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.557853 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wffsq\" (UniqueName: \"kubernetes.io/projected/2bdb9dbd-b178-43f6-985a-1b19f40820cd-kube-api-access-wffsq\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.852918 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" event={"ID":"2bdb9dbd-b178-43f6-985a-1b19f40820cd","Type":"ContainerDied","Data":"e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3"} Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.852979 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e79cad2a0ab4572ac10920e4b33f507a1f294fb0692caa877237cac2f3f231c3" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.853035 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.946903 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4"] Jan 31 15:20:11 crc kubenswrapper[4735]: E0131 15:20:11.947475 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bdb9dbd-b178-43f6-985a-1b19f40820cd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.947503 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bdb9dbd-b178-43f6-985a-1b19f40820cd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.947792 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bdb9dbd-b178-43f6-985a-1b19f40820cd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.948717 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.952264 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.952524 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.952678 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.952949 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:20:11 crc kubenswrapper[4735]: I0131 15:20:11.955891 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4"] Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.068885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zlf\" (UniqueName: \"kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.069210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.069305 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.171033 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.171530 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8zlf\" (UniqueName: \"kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.171640 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.177382 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.180072 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.189868 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8zlf\" (UniqueName: \"kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9thw4\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.281672 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.664707 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4"] Jan 31 15:20:12 crc kubenswrapper[4735]: W0131 15:20:12.667757 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6468a09d_c7d4_428a_bfa1_50c28830f709.slice/crio-87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c WatchSource:0}: Error finding container 87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c: Status 404 returned error can't find the container with id 87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c Jan 31 15:20:12 crc kubenswrapper[4735]: I0131 15:20:12.868344 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" event={"ID":"6468a09d-c7d4-428a-bfa1-50c28830f709","Type":"ContainerStarted","Data":"87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c"} Jan 31 15:20:13 crc kubenswrapper[4735]: I0131 15:20:13.877759 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" event={"ID":"6468a09d-c7d4-428a-bfa1-50c28830f709","Type":"ContainerStarted","Data":"e97aa50d580a37e1bc284627acf55030a25e396c1ca8c4bd2812d9c7e2ac0063"} Jan 31 15:20:13 crc kubenswrapper[4735]: I0131 15:20:13.897411 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" podStartSLOduration=2.313681411 podStartE2EDuration="2.897382617s" podCreationTimestamp="2026-01-31 15:20:11 +0000 UTC" firstStartedPulling="2026-01-31 15:20:12.670501787 +0000 UTC m=+1298.443830839" lastFinishedPulling="2026-01-31 15:20:13.254202953 +0000 UTC m=+1299.027532045" 
observedRunningTime="2026-01-31 15:20:13.890005859 +0000 UTC m=+1299.663334901" watchObservedRunningTime="2026-01-31 15:20:13.897382617 +0000 UTC m=+1299.670711679" Jan 31 15:20:16 crc kubenswrapper[4735]: I0131 15:20:16.834677 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 15:20:16 crc kubenswrapper[4735]: I0131 15:20:16.928100 4735 generic.go:334] "Generic (PLEG): container finished" podID="6468a09d-c7d4-428a-bfa1-50c28830f709" containerID="e97aa50d580a37e1bc284627acf55030a25e396c1ca8c4bd2812d9c7e2ac0063" exitCode=0 Jan 31 15:20:16 crc kubenswrapper[4735]: I0131 15:20:16.928157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" event={"ID":"6468a09d-c7d4-428a-bfa1-50c28830f709","Type":"ContainerDied","Data":"e97aa50d580a37e1bc284627acf55030a25e396c1ca8c4bd2812d9c7e2ac0063"} Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.428738 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.510575 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory\") pod \"6468a09d-c7d4-428a-bfa1-50c28830f709\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.510653 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8zlf\" (UniqueName: \"kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf\") pod \"6468a09d-c7d4-428a-bfa1-50c28830f709\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.510732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam\") pod \"6468a09d-c7d4-428a-bfa1-50c28830f709\" (UID: \"6468a09d-c7d4-428a-bfa1-50c28830f709\") " Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.517035 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf" (OuterVolumeSpecName: "kube-api-access-n8zlf") pod "6468a09d-c7d4-428a-bfa1-50c28830f709" (UID: "6468a09d-c7d4-428a-bfa1-50c28830f709"). InnerVolumeSpecName "kube-api-access-n8zlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.548318 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6468a09d-c7d4-428a-bfa1-50c28830f709" (UID: "6468a09d-c7d4-428a-bfa1-50c28830f709"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.581821 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory" (OuterVolumeSpecName: "inventory") pod "6468a09d-c7d4-428a-bfa1-50c28830f709" (UID: "6468a09d-c7d4-428a-bfa1-50c28830f709"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.613538 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8zlf\" (UniqueName: \"kubernetes.io/projected/6468a09d-c7d4-428a-bfa1-50c28830f709-kube-api-access-n8zlf\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.613587 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.613609 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6468a09d-c7d4-428a-bfa1-50c28830f709-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.951617 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" event={"ID":"6468a09d-c7d4-428a-bfa1-50c28830f709","Type":"ContainerDied","Data":"87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c"} Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.951665 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87772ac2ed52d6f17fe39b035fd8df1a5ab3cd5068761c386c54acadbb4c7f2c" Jan 31 15:20:18 crc kubenswrapper[4735]: I0131 15:20:18.951746 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9thw4" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.053463 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7"] Jan 31 15:20:19 crc kubenswrapper[4735]: E0131 15:20:19.053989 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6468a09d-c7d4-428a-bfa1-50c28830f709" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.054008 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="6468a09d-c7d4-428a-bfa1-50c28830f709" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.054193 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="6468a09d-c7d4-428a-bfa1-50c28830f709" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.054955 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.058774 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.058886 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.059025 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.059233 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.088275 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7"] Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.125712 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.125827 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.125974 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.126123 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxdn9\" (UniqueName: \"kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.228439 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.228574 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxdn9\" (UniqueName: 
\"kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.228800 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.228851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.233925 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.236378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.236919 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.247935 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxdn9\" (UniqueName: \"kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.386463 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:20:19 crc kubenswrapper[4735]: I0131 15:20:19.971738 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7"] Jan 31 15:20:20 crc kubenswrapper[4735]: I0131 15:20:20.981215 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" event={"ID":"8b16424b-3400-4f1a-931f-f0a2a398859c","Type":"ContainerStarted","Data":"4f4057a8662425b0c81e96c0fd9645a647ce55c2696b6ee55667a365a2c03a27"} Jan 31 15:20:20 crc kubenswrapper[4735]: I0131 15:20:20.982312 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" event={"ID":"8b16424b-3400-4f1a-931f-f0a2a398859c","Type":"ContainerStarted","Data":"65da3120524eccc7889e07d44fa8f36edd42282c8e4a546f070ea3e297c34dc0"} Jan 31 15:20:21 crc kubenswrapper[4735]: I0131 15:20:21.019656 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" podStartSLOduration=1.5743512179999999 podStartE2EDuration="2.019634189s" podCreationTimestamp="2026-01-31 15:20:19 +0000 UTC" firstStartedPulling="2026-01-31 15:20:19.977275655 +0000 UTC m=+1305.750604697" lastFinishedPulling="2026-01-31 15:20:20.422558586 +0000 UTC m=+1306.195887668" observedRunningTime="2026-01-31 15:20:21.013718342 +0000 UTC m=+1306.787047424" watchObservedRunningTime="2026-01-31 15:20:21.019634189 +0000 UTC m=+1306.792963231" Jan 31 15:20:37 crc kubenswrapper[4735]: I0131 15:20:37.346242 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:20:37 crc kubenswrapper[4735]: I0131 15:20:37.346896 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:20:38 crc kubenswrapper[4735]: I0131 15:20:38.879081 4735 scope.go:117] "RemoveContainer" containerID="d0b6cf35090bc799300fab5aaa8952738e3d3bd78fe05a1369976c3a400378f8" Jan 31 15:20:38 crc kubenswrapper[4735]: I0131 15:20:38.901553 4735 scope.go:117] "RemoveContainer" containerID="718bcd90643fb57832dc634a3474c2a388fad427619adc8da4a29634472564b2" Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.346210 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.347075 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.347146 4735 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.348343 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.348499 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53" gracePeriod=600 Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.532998 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53" exitCode=0 Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.533066 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53"} Jan 31 15:21:07 crc kubenswrapper[4735]: I0131 15:21:07.533152 4735 scope.go:117] "RemoveContainer" containerID="615aef1ea74a37b96d7f92cebf1bff71c6062df4d66a76c35cc268218af8055c" Jan 31 15:21:08 crc kubenswrapper[4735]: I0131 15:21:08.545344 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0"} Jan 31 15:21:38 crc kubenswrapper[4735]: I0131 15:21:38.990586 4735 scope.go:117] "RemoveContainer" containerID="972e531c10b2ff3ca68dc07df81cbca1f749b88e488aafb01f799a96591be545" Jan 31 15:23:07 crc kubenswrapper[4735]: I0131 15:23:07.346144 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:23:07 crc kubenswrapper[4735]: I0131 15:23:07.346693 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:23:07 crc kubenswrapper[4735]: I0131 15:23:07.896521 4735 generic.go:334] "Generic (PLEG): container finished" podID="8b16424b-3400-4f1a-931f-f0a2a398859c" containerID="4f4057a8662425b0c81e96c0fd9645a647ce55c2696b6ee55667a365a2c03a27" exitCode=0 Jan 31 15:23:07 crc kubenswrapper[4735]: I0131 15:23:07.896571 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" 
event={"ID":"8b16424b-3400-4f1a-931f-f0a2a398859c","Type":"ContainerDied","Data":"4f4057a8662425b0c81e96c0fd9645a647ce55c2696b6ee55667a365a2c03a27"} Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.394863 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.458129 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxdn9\" (UniqueName: \"kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9\") pod \"8b16424b-3400-4f1a-931f-f0a2a398859c\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.458224 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory\") pod \"8b16424b-3400-4f1a-931f-f0a2a398859c\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.458359 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle\") pod \"8b16424b-3400-4f1a-931f-f0a2a398859c\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.458476 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam\") pod \"8b16424b-3400-4f1a-931f-f0a2a398859c\" (UID: \"8b16424b-3400-4f1a-931f-f0a2a398859c\") " Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.470103 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9" (OuterVolumeSpecName: "kube-api-access-bxdn9") pod "8b16424b-3400-4f1a-931f-f0a2a398859c" (UID: "8b16424b-3400-4f1a-931f-f0a2a398859c"). InnerVolumeSpecName "kube-api-access-bxdn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.470128 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8b16424b-3400-4f1a-931f-f0a2a398859c" (UID: "8b16424b-3400-4f1a-931f-f0a2a398859c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.491953 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory" (OuterVolumeSpecName: "inventory") pod "8b16424b-3400-4f1a-931f-f0a2a398859c" (UID: "8b16424b-3400-4f1a-931f-f0a2a398859c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.496155 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b16424b-3400-4f1a-931f-f0a2a398859c" (UID: "8b16424b-3400-4f1a-931f-f0a2a398859c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.560663 4735 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.560705 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.560716 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxdn9\" (UniqueName: \"kubernetes.io/projected/8b16424b-3400-4f1a-931f-f0a2a398859c-kube-api-access-bxdn9\") on node \"crc\" DevicePath \"\"" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.560727 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b16424b-3400-4f1a-931f-f0a2a398859c-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.919968 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" event={"ID":"8b16424b-3400-4f1a-931f-f0a2a398859c","Type":"ContainerDied","Data":"65da3120524eccc7889e07d44fa8f36edd42282c8e4a546f070ea3e297c34dc0"} Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.920279 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65da3120524eccc7889e07d44fa8f36edd42282c8e4a546f070ea3e297c34dc0" Jan 31 15:23:09 crc kubenswrapper[4735]: I0131 15:23:09.920109 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.028609 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9"] Jan 31 15:23:10 crc kubenswrapper[4735]: E0131 15:23:10.029042 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b16424b-3400-4f1a-931f-f0a2a398859c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.029063 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b16424b-3400-4f1a-931f-f0a2a398859c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.029242 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b16424b-3400-4f1a-931f-f0a2a398859c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.029893 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.033658 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.033890 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.034103 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.040890 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.058238 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9"] Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.092517 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkfk\" (UniqueName: \"kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.092610 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.092665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.194726 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgkfk\" (UniqueName: \"kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.194808 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.194849 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.213474 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.216935 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.217175 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgkfk\" (UniqueName: \"kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n87t9\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.353776 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:23:10 crc kubenswrapper[4735]: I0131 15:23:10.920667 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9"] Jan 31 15:23:11 crc kubenswrapper[4735]: I0131 15:23:11.948672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" event={"ID":"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83","Type":"ContainerStarted","Data":"aff11f0d581f8d18218bd2b0b65d1876a082f7bf09434bcc3d4cc3c1b0bece52"} Jan 31 15:23:11 crc kubenswrapper[4735]: I0131 15:23:11.949027 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" event={"ID":"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83","Type":"ContainerStarted","Data":"6723b0efd157599d76e8a32ec776a8c8123b870e78cf2b2efe1434054935f0ec"} Jan 31 15:23:11 crc kubenswrapper[4735]: I0131 15:23:11.975338 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" podStartSLOduration=1.575048397 podStartE2EDuration="1.975308863s" podCreationTimestamp="2026-01-31 15:23:10 +0000 UTC" firstStartedPulling="2026-01-31 15:23:10.932134731 +0000 UTC m=+1476.705463813" lastFinishedPulling="2026-01-31 15:23:11.332395207 +0000 UTC m=+1477.105724279" observedRunningTime="2026-01-31 15:23:11.969946282 +0000 UTC m=+1477.743275384" watchObservedRunningTime="2026-01-31 15:23:11.975308863 +0000 UTC m=+1477.748637945" Jan 31 15:23:37 crc kubenswrapper[4735]: I0131 15:23:37.345945 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:23:37 crc kubenswrapper[4735]: I0131 15:23:37.346633 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.091072 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.093649 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.099060 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.109926 4735 scope.go:117] "RemoveContainer" containerID="c5bfc53fb7b8f0ee0d443dfea905b1af52ad046d79db9fa4f9144e106f6cc258" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.212815 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfgh\" (UniqueName: \"kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.212918 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.212947 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.314583 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfgh\" (UniqueName: \"kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.314700 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.314728 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.315347 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.315358 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.335990 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfgh\" (UniqueName: \"kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh\") pod \"redhat-operators-rlk5c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.424873 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:39 crc kubenswrapper[4735]: I0131 15:23:39.928087 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:23:40 crc kubenswrapper[4735]: I0131 15:23:40.237022 4735 generic.go:334] "Generic (PLEG): container finished" podID="5657c5db-91a2-4254-a35c-736faa70763c" containerID="5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9" exitCode=0 Jan 31 15:23:40 crc kubenswrapper[4735]: I0131 15:23:40.237064 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerDied","Data":"5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9"} Jan 31 15:23:40 crc kubenswrapper[4735]: I0131 15:23:40.237321 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerStarted","Data":"9675b8534c695f0e203ed6867cfaca8d695672850d26cf2521bf487817554a74"} Jan 31 15:23:40 crc kubenswrapper[4735]: I0131 15:23:40.238990 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:23:41 crc kubenswrapper[4735]: I0131 15:23:41.251181 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerStarted","Data":"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e"} Jan 31 15:23:43 crc kubenswrapper[4735]: I0131 15:23:43.273878 4735 generic.go:334] "Generic (PLEG): container finished" podID="5657c5db-91a2-4254-a35c-736faa70763c" containerID="2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e" exitCode=0 Jan 31 15:23:43 crc kubenswrapper[4735]: I0131 15:23:43.274066 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" 
event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerDied","Data":"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e"} Jan 31 15:23:45 crc kubenswrapper[4735]: I0131 15:23:45.296845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerStarted","Data":"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59"} Jan 31 15:23:45 crc kubenswrapper[4735]: I0131 15:23:45.319630 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rlk5c" podStartSLOduration=2.162165874 podStartE2EDuration="6.319613001s" podCreationTimestamp="2026-01-31 15:23:39 +0000 UTC" firstStartedPulling="2026-01-31 15:23:40.238717018 +0000 UTC m=+1506.012046060" lastFinishedPulling="2026-01-31 15:23:44.396164145 +0000 UTC m=+1510.169493187" observedRunningTime="2026-01-31 15:23:45.317307818 +0000 UTC m=+1511.090636870" watchObservedRunningTime="2026-01-31 15:23:45.319613001 +0000 UTC m=+1511.092942043" Jan 31 15:23:49 crc kubenswrapper[4735]: I0131 15:23:49.425973 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:49 crc kubenswrapper[4735]: I0131 15:23:49.426744 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:50 crc kubenswrapper[4735]: I0131 15:23:50.474306 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rlk5c" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="registry-server" probeResult="failure" output=< Jan 31 15:23:50 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:23:50 crc kubenswrapper[4735]: > Jan 31 15:23:59 crc kubenswrapper[4735]: I0131 15:23:59.496302 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:59 crc kubenswrapper[4735]: I0131 15:23:59.589395 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:23:59 crc kubenswrapper[4735]: I0131 15:23:59.742523 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:24:01 crc kubenswrapper[4735]: I0131 15:24:01.454410 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rlk5c" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="registry-server" containerID="cri-o://6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59" gracePeriod=2 Jan 31 15:24:01 crc kubenswrapper[4735]: I0131 15:24:01.942067 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.088676 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities\") pod \"5657c5db-91a2-4254-a35c-736faa70763c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.088984 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qfgh\" (UniqueName: \"kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh\") pod \"5657c5db-91a2-4254-a35c-736faa70763c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.089232 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content\") pod \"5657c5db-91a2-4254-a35c-736faa70763c\" (UID: \"5657c5db-91a2-4254-a35c-736faa70763c\") " Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.095214 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh" (OuterVolumeSpecName: "kube-api-access-6qfgh") pod "5657c5db-91a2-4254-a35c-736faa70763c" (UID: "5657c5db-91a2-4254-a35c-736faa70763c"). InnerVolumeSpecName "kube-api-access-6qfgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.105031 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities" (OuterVolumeSpecName: "utilities") pod "5657c5db-91a2-4254-a35c-736faa70763c" (UID: "5657c5db-91a2-4254-a35c-736faa70763c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.191250 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.191279 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qfgh\" (UniqueName: \"kubernetes.io/projected/5657c5db-91a2-4254-a35c-736faa70763c-kube-api-access-6qfgh\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.241637 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5657c5db-91a2-4254-a35c-736faa70763c" (UID: "5657c5db-91a2-4254-a35c-736faa70763c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.294360 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5657c5db-91a2-4254-a35c-736faa70763c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.467158 4735 generic.go:334] "Generic (PLEG): container finished" podID="5657c5db-91a2-4254-a35c-736faa70763c" containerID="6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59" exitCode=0 Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.467231 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerDied","Data":"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59"} Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.467245 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlk5c" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.467285 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlk5c" event={"ID":"5657c5db-91a2-4254-a35c-736faa70763c","Type":"ContainerDied","Data":"9675b8534c695f0e203ed6867cfaca8d695672850d26cf2521bf487817554a74"} Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.467319 4735 scope.go:117] "RemoveContainer" containerID="6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.503403 4735 scope.go:117] "RemoveContainer" containerID="2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.520675 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.545130 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rlk5c"] Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.561004 4735 scope.go:117] "RemoveContainer" containerID="5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.615557 4735 scope.go:117] "RemoveContainer" containerID="6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59" Jan 31 15:24:02 crc kubenswrapper[4735]: E0131 15:24:02.616735 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59\": container with ID starting with 6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59 not found: ID does not exist" containerID="6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.616815 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59"} err="failed to get container status \"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59\": rpc error: code = NotFound desc = could not find container \"6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59\": container with ID starting with 6d2b192112fa2f0cf1daf3d9dbe634de6f9bc59fee269e607e428c7ca0462c59 not found: ID does not exist" Jan 31 15:24:02 crc 
kubenswrapper[4735]: I0131 15:24:02.616854 4735 scope.go:117] "RemoveContainer" containerID="2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e" Jan 31 15:24:02 crc kubenswrapper[4735]: E0131 15:24:02.617635 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e\": container with ID starting with 2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e not found: ID does not exist" containerID="2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.617670 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e"} err="failed to get container status \"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e\": rpc error: code = NotFound desc = could not find container \"2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e\": container with ID starting with 2f9a68331e911a12cd846e514bdfda070648edc8b482ad1c07439c57ea15387e not found: ID does not exist" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.617694 4735 scope.go:117] "RemoveContainer" containerID="5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9" Jan 31 15:24:02 crc kubenswrapper[4735]: E0131 15:24:02.618793 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9\": container with ID starting with 5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9 not found: ID does not exist" containerID="5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9" Jan 31 15:24:02 crc kubenswrapper[4735]: I0131 15:24:02.618851 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9"} err="failed to get container status \"5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9\": rpc error: code = NotFound desc = could not find container \"5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9\": container with ID starting with 5d91f9bf70672328442b1f182f70caa65686d4f4f19719fe3b19089267b471e9 not found: ID does not exist" Jan 31 15:24:03 crc kubenswrapper[4735]: I0131 15:24:03.554016 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5657c5db-91a2-4254-a35c-736faa70763c" path="/var/lib/kubelet/pods/5657c5db-91a2-4254-a35c-736faa70763c/volumes" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.346009 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.346685 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.346727 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.347792 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.347865 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" gracePeriod=600 Jan 31 15:24:07 crc kubenswrapper[4735]: E0131 15:24:07.490453 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.532741 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" exitCode=0 Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.532821 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0"} Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.532888 4735 scope.go:117] "RemoveContainer" containerID="5ccb5e4617079b8faaeabd9785064c7f23d1d1ea0a27109f0408b89c564bef53" Jan 31 15:24:07 crc kubenswrapper[4735]: I0131 15:24:07.534082 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:24:07 crc kubenswrapper[4735]: E0131 15:24:07.534481 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:24:22 crc kubenswrapper[4735]: I0131 15:24:22.540680 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:24:22 crc kubenswrapper[4735]: E0131 15:24:22.541883 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:24:23 
crc kubenswrapper[4735]: I0131 15:24:23.063147 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2bbb-account-create-update-tjlxh"] Jan 31 15:24:23 crc kubenswrapper[4735]: I0131 15:24:23.077438 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2bbb-account-create-update-tjlxh"] Jan 31 15:24:23 crc kubenswrapper[4735]: I0131 15:24:23.559875 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc83c3a-8612-4615-9c72-64e73fd22e8a" path="/var/lib/kubelet/pods/4dc83c3a-8612-4615-9c72-64e73fd22e8a/volumes" Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.043037 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7d1b-account-create-update-6rqlg"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.056547 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54d7-account-create-update-dqzl9"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.071498 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9jmx2"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.080581 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g9wxj"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.089284 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vrsn9"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.097127 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7d1b-account-create-update-6rqlg"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.104702 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-54d7-account-create-update-dqzl9"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.111881 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g9wxj"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.118995 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9jmx2"] Jan 31 15:24:24 crc kubenswrapper[4735]: I0131 15:24:24.125871 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vrsn9"] Jan 31 15:24:25 crc kubenswrapper[4735]: I0131 15:24:25.554810 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074ec696-8193-4cde-a5d1-a1b892a078ab" path="/var/lib/kubelet/pods/074ec696-8193-4cde-a5d1-a1b892a078ab/volumes" Jan 31 15:24:25 crc kubenswrapper[4735]: I0131 15:24:25.556478 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18645daa-dccb-485c-922e-847af9f4c6a0" path="/var/lib/kubelet/pods/18645daa-dccb-485c-922e-847af9f4c6a0/volumes" Jan 31 15:24:25 crc kubenswrapper[4735]: I0131 15:24:25.557704 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4046fd4c-0329-4995-9691-0fee238a9907" path="/var/lib/kubelet/pods/4046fd4c-0329-4995-9691-0fee238a9907/volumes" Jan 31 15:24:25 crc kubenswrapper[4735]: I0131 15:24:25.558809 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49118420-7fc9-4bb6-8bb5-9d90dc2605f0" path="/var/lib/kubelet/pods/49118420-7fc9-4bb6-8bb5-9d90dc2605f0/volumes" Jan 31 15:24:25 crc kubenswrapper[4735]: I0131 15:24:25.561277 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41448eb-e005-42d7-b16d-06a4d829a6b2" path="/var/lib/kubelet/pods/b41448eb-e005-42d7-b16d-06a4d829a6b2/volumes" Jan 31 15:24:26 crc kubenswrapper[4735]: I0131 
15:24:26.766952 4735 generic.go:334] "Generic (PLEG): container finished" podID="f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" containerID="aff11f0d581f8d18218bd2b0b65d1876a082f7bf09434bcc3d4cc3c1b0bece52" exitCode=0 Jan 31 15:24:26 crc kubenswrapper[4735]: I0131 15:24:26.767020 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" event={"ID":"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83","Type":"ContainerDied","Data":"aff11f0d581f8d18218bd2b0b65d1876a082f7bf09434bcc3d4cc3c1b0bece52"} Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.299732 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.373329 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgkfk\" (UniqueName: \"kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk\") pod \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.373432 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam\") pod \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.373577 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory\") pod \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\" (UID: \"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83\") " Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.379209 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk" (OuterVolumeSpecName: "kube-api-access-cgkfk") pod "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" (UID: "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83"). InnerVolumeSpecName "kube-api-access-cgkfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.400625 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" (UID: "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.407388 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory" (OuterVolumeSpecName: "inventory") pod "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" (UID: "f3889565-cd9c-4a0a-80d5-09bc3f0e0a83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.476362 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.476403 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgkfk\" (UniqueName: \"kubernetes.io/projected/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-kube-api-access-cgkfk\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.476418 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3889565-cd9c-4a0a-80d5-09bc3f0e0a83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.838216 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" event={"ID":"f3889565-cd9c-4a0a-80d5-09bc3f0e0a83","Type":"ContainerDied","Data":"6723b0efd157599d76e8a32ec776a8c8123b870e78cf2b2efe1434054935f0ec"} Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.838486 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6723b0efd157599d76e8a32ec776a8c8123b870e78cf2b2efe1434054935f0ec" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.838310 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n87t9" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894162 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq"] Jan 31 15:24:28 crc kubenswrapper[4735]: E0131 15:24:28.894601 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894615 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 15:24:28 crc kubenswrapper[4735]: E0131 15:24:28.894633 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="extract-utilities" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894640 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="extract-utilities" Jan 31 15:24:28 crc kubenswrapper[4735]: E0131 15:24:28.894652 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="registry-server" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894658 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="registry-server" Jan 31 15:24:28 crc kubenswrapper[4735]: E0131 15:24:28.894680 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="extract-content" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894687 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="extract-content" Jan 31 15:24:28 crc kubenswrapper[4735]: 
I0131 15:24:28.894851 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3889565-cd9c-4a0a-80d5-09bc3f0e0a83" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.894864 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5657c5db-91a2-4254-a35c-736faa70763c" containerName="registry-server" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.895592 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.901144 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.901627 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.901806 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.901954 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.906012 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq"] Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.984226 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphtg\" (UniqueName: \"kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.984283 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:28 crc kubenswrapper[4735]: I0131 15:24:28.984507 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.086247 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.086395 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.086777 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphtg\" (UniqueName: \"kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.093088 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.093254 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.109927 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphtg\" (UniqueName: \"kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hspmq\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.223083 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:24:29 crc kubenswrapper[4735]: W0131 15:24:29.773246 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda56e09c3_9ece_4ce0_9a98_1d97bd4b24d7.slice/crio-00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83 WatchSource:0}: Error finding container 00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83: Status 404 returned error can't find the container with id 00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83 Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.778329 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq"] Jan 31 15:24:29 crc kubenswrapper[4735]: I0131 15:24:29.848206 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" event={"ID":"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7","Type":"ContainerStarted","Data":"00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83"} Jan 31 15:24:30 crc kubenswrapper[4735]: I0131 15:24:30.859690 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" event={"ID":"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7","Type":"ContainerStarted","Data":"195aec68ba192b26d3262d9d01c84e818da9ae856abc0e02e67f6d860358fa85"} Jan 31 15:24:31 crc kubenswrapper[4735]: I0131 15:24:31.042415 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" podStartSLOduration=2.655471691 podStartE2EDuration="3.042388139s" podCreationTimestamp="2026-01-31 15:24:28 +0000 UTC" firstStartedPulling="2026-01-31 15:24:29.778486413 +0000 UTC m=+1555.551815465" lastFinishedPulling="2026-01-31 15:24:30.165402871 +0000 UTC m=+1555.938731913" observedRunningTime="2026-01-31 15:24:30.892668639 +0000 UTC m=+1556.665997681" watchObservedRunningTime="2026-01-31 15:24:31.042388139 +0000 UTC m=+1556.815717191" Jan 31 15:24:31 crc kubenswrapper[4735]: I0131 15:24:31.045779 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4vc7k"] Jan 31 15:24:31 crc kubenswrapper[4735]: I0131 15:24:31.055247 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4vc7k"] Jan 31 15:24:31 crc kubenswrapper[4735]: I0131 15:24:31.564135 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328eb3c0-bc56-40b1-88f4-e84da75dcffa" path="/var/lib/kubelet/pods/328eb3c0-bc56-40b1-88f4-e84da75dcffa/volumes" Jan 31 15:24:34 crc kubenswrapper[4735]: I0131 15:24:34.540556 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:24:34 crc kubenswrapper[4735]: E0131 15:24:34.541618 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.194145 4735 scope.go:117] "RemoveContainer" 
containerID="23baca0f9f19850cf785bc01f2a02841efe9ab9dc350d661019cd8fe67c12a02" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.238638 4735 scope.go:117] "RemoveContainer" containerID="1714d1843e4767f6d87e1ba2028f3fe764d64783562808b9a70159b25869d64d" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.289881 4735 scope.go:117] "RemoveContainer" containerID="20a37cfe493a4b03f6e7b1cf7846fb7dadfc89232b92782836aa4a60e9ecd922" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.331349 4735 scope.go:117] "RemoveContainer" containerID="771602732d55e74d2e20e2504813f92fa41341d8163e55aa8691445729447e28" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.363272 4735 scope.go:117] "RemoveContainer" containerID="67e287aeb9cc826d9213d44f56a2284064f60b580e3d1baf5808322cda6bcf3c" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.420576 4735 scope.go:117] "RemoveContainer" containerID="c1d8a66fc16f4e29f1ce645b96d4c8da320ba74f27459422770ace60ffb9e2d0" Jan 31 15:24:39 crc kubenswrapper[4735]: I0131 15:24:39.450960 4735 scope.go:117] "RemoveContainer" containerID="6dcc8b2f271b858f059b517d1a66c42ca2b9b6839a1eec104d32ff74a2471c42" Jan 31 15:24:47 crc kubenswrapper[4735]: I0131 15:24:47.069901 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6l488"] Jan 31 15:24:47 crc kubenswrapper[4735]: I0131 15:24:47.081824 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6l488"] Jan 31 15:24:47 crc kubenswrapper[4735]: I0131 15:24:47.540745 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:24:47 crc kubenswrapper[4735]: E0131 15:24:47.541360 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:24:47 crc kubenswrapper[4735]: I0131 15:24:47.561469 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d29a2f-1ba4-48e8-8c33-c1a96440ae36" path="/var/lib/kubelet/pods/03d29a2f-1ba4-48e8-8c33-c1a96440ae36/volumes" Jan 31 15:24:50 crc kubenswrapper[4735]: I0131 15:24:50.035286 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jq6db"] Jan 31 15:24:50 crc kubenswrapper[4735]: I0131 15:24:50.046724 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jq6db"] Jan 31 15:24:51 crc kubenswrapper[4735]: I0131 15:24:51.560186 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecf62c3-6e9c-44cb-9963-6ac8a95baa14" path="/var/lib/kubelet/pods/cecf62c3-6e9c-44cb-9963-6ac8a95baa14/volumes" Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.045187 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d2zgp"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.062839 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9777-account-create-update-wjtv8"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.075837 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vrmlh"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.085567 4735 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-b354-account-create-update-dsnth"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.095294 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e746-account-create-update-djp42"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.104602 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d2zgp"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.112709 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9777-account-create-update-wjtv8"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.121151 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vrmlh"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.129791 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b354-account-create-update-dsnth"] Jan 31 15:24:54 crc kubenswrapper[4735]: I0131 15:24:54.142338 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e746-account-create-update-djp42"] Jan 31 15:24:55 crc kubenswrapper[4735]: I0131 15:24:55.552525 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b376088-4875-44ac-a0b2-2e80ffa08acf" path="/var/lib/kubelet/pods/2b376088-4875-44ac-a0b2-2e80ffa08acf/volumes" Jan 31 15:24:55 crc kubenswrapper[4735]: I0131 15:24:55.553239 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b16dfd-fefb-49ee-adde-9d244ca8ccbe" path="/var/lib/kubelet/pods/41b16dfd-fefb-49ee-adde-9d244ca8ccbe/volumes" Jan 31 15:24:55 crc kubenswrapper[4735]: I0131 15:24:55.554100 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bc38a4-6212-4fa2-afb1-e8da3e3271a6" path="/var/lib/kubelet/pods/d0bc38a4-6212-4fa2-afb1-e8da3e3271a6/volumes" Jan 31 15:24:55 crc kubenswrapper[4735]: I0131 15:24:55.554896 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e72d3a05-ea57-4446-96f6-731172bde4a3" path="/var/lib/kubelet/pods/e72d3a05-ea57-4446-96f6-731172bde4a3/volumes" Jan 31 15:24:55 crc kubenswrapper[4735]: I0131 15:24:55.555649 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6922b6e-24c3-4a4d-99fd-7027a8b33273" path="/var/lib/kubelet/pods/f6922b6e-24c3-4a4d-99fd-7027a8b33273/volumes" Jan 31 15:24:58 crc kubenswrapper[4735]: I0131 15:24:58.066126 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-79q5z"] Jan 31 15:24:58 crc kubenswrapper[4735]: I0131 15:24:58.081345 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-79q5z"] Jan 31 15:24:59 crc kubenswrapper[4735]: I0131 15:24:59.563363 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893a5a30-2ca2-4d47-9882-2bf19e0233ad" path="/var/lib/kubelet/pods/893a5a30-2ca2-4d47-9882-2bf19e0233ad/volumes" Jan 31 15:25:00 crc kubenswrapper[4735]: I0131 15:25:00.540223 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:25:00 crc kubenswrapper[4735]: E0131 15:25:00.540870 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" 
podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.793290 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.795634 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.810081 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.873146 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866hv\" (UniqueName: \"kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.873492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.873689 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.976687 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.976837 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.977046 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866hv\" (UniqueName: \"kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.977274 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:08 crc kubenswrapper[4735]: I0131 15:25:08.977300 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:09 crc kubenswrapper[4735]: I0131 15:25:09.004254 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866hv\" (UniqueName: \"kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv\") pod \"certified-operators-rqfkd\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:09 crc kubenswrapper[4735]: I0131 15:25:09.125231 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:09 crc kubenswrapper[4735]: I0131 15:25:09.616896 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:09 crc kubenswrapper[4735]: W0131 15:25:09.618067 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod170545bf_98b2_402f_a124_8e2e7e087782.slice/crio-fadcefe9d072ae366ba170bfcde39fa339c9413e37d6a09e8f2307577a68a3ff WatchSource:0}: Error finding container fadcefe9d072ae366ba170bfcde39fa339c9413e37d6a09e8f2307577a68a3ff: Status 404 returned error can't find the container with id fadcefe9d072ae366ba170bfcde39fa339c9413e37d6a09e8f2307577a68a3ff Jan 31 15:25:10 crc kubenswrapper[4735]: I0131 15:25:10.292014 4735 generic.go:334] "Generic (PLEG): container finished" podID="170545bf-98b2-402f-a124-8e2e7e087782" containerID="629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4" exitCode=0 Jan 31 15:25:10 crc kubenswrapper[4735]: I0131 15:25:10.292272 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerDied","Data":"629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4"} Jan 31 15:25:10 crc kubenswrapper[4735]: I0131 15:25:10.292296 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerStarted","Data":"fadcefe9d072ae366ba170bfcde39fa339c9413e37d6a09e8f2307577a68a3ff"} Jan 31 15:25:11 crc kubenswrapper[4735]: I0131 15:25:11.305948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerStarted","Data":"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d"} Jan 31 15:25:12 crc kubenswrapper[4735]: I0131 15:25:12.316683 4735 generic.go:334] "Generic (PLEG): container finished" podID="170545bf-98b2-402f-a124-8e2e7e087782" containerID="936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d" exitCode=0 Jan 31 15:25:12 crc kubenswrapper[4735]: I0131 15:25:12.316744 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerDied","Data":"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d"} Jan 31 15:25:13 crc kubenswrapper[4735]: I0131 15:25:13.329324 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" 
event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerStarted","Data":"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7"} Jan 31 15:25:14 crc kubenswrapper[4735]: I0131 15:25:14.539640 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:25:14 crc kubenswrapper[4735]: E0131 15:25:14.540103 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.126137 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.126684 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.231664 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.250170 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqfkd" podStartSLOduration=8.697293409 podStartE2EDuration="11.250150503s" podCreationTimestamp="2026-01-31 15:25:08 +0000 UTC" firstStartedPulling="2026-01-31 15:25:10.294311111 +0000 UTC m=+1596.067640153" lastFinishedPulling="2026-01-31 15:25:12.847168205 +0000 UTC m=+1598.620497247" observedRunningTime="2026-01-31 15:25:13.349896512 +0000 UTC m=+1599.123225564" watchObservedRunningTime="2026-01-31 15:25:19.250150503 +0000 UTC m=+1605.023479545" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.418951 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:19 crc kubenswrapper[4735]: I0131 15:25:19.464564 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.404691 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqfkd" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="registry-server" containerID="cri-o://8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7" gracePeriod=2 Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.860672 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.936054 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866hv\" (UniqueName: \"kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv\") pod \"170545bf-98b2-402f-a124-8e2e7e087782\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.936311 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content\") pod \"170545bf-98b2-402f-a124-8e2e7e087782\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.936382 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities\") pod \"170545bf-98b2-402f-a124-8e2e7e087782\" (UID: \"170545bf-98b2-402f-a124-8e2e7e087782\") " Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.937346 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities" (OuterVolumeSpecName: "utilities") pod "170545bf-98b2-402f-a124-8e2e7e087782" (UID: "170545bf-98b2-402f-a124-8e2e7e087782"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:21 crc kubenswrapper[4735]: I0131 15:25:21.944675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv" (OuterVolumeSpecName: "kube-api-access-866hv") pod "170545bf-98b2-402f-a124-8e2e7e087782" (UID: "170545bf-98b2-402f-a124-8e2e7e087782"). InnerVolumeSpecName "kube-api-access-866hv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.038506 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.038546 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-866hv\" (UniqueName: \"kubernetes.io/projected/170545bf-98b2-402f-a124-8e2e7e087782-kube-api-access-866hv\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.414554 4735 generic.go:334] "Generic (PLEG): container finished" podID="170545bf-98b2-402f-a124-8e2e7e087782" containerID="8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7" exitCode=0 Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.414594 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerDied","Data":"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7"} Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.414618 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfkd" event={"ID":"170545bf-98b2-402f-a124-8e2e7e087782","Type":"ContainerDied","Data":"fadcefe9d072ae366ba170bfcde39fa339c9413e37d6a09e8f2307577a68a3ff"} Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.414634 4735 scope.go:117] "RemoveContainer" containerID="8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.414756 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfkd" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.438840 4735 scope.go:117] "RemoveContainer" containerID="936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.444246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "170545bf-98b2-402f-a124-8e2e7e087782" (UID: "170545bf-98b2-402f-a124-8e2e7e087782"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.446293 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170545bf-98b2-402f-a124-8e2e7e087782-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.469912 4735 scope.go:117] "RemoveContainer" containerID="629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.515262 4735 scope.go:117] "RemoveContainer" containerID="8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7" Jan 31 15:25:22 crc kubenswrapper[4735]: E0131 15:25:22.515839 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7\": container with ID starting with 8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7 not found: ID does not exist" containerID="8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.515904 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7"} err="failed to get container status \"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7\": rpc error: code = NotFound desc = could not find container \"8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7\": container with ID starting with 8336bcc7308333ee7bb88f0172533368be71d19679448e2ea9c10f428ae161e7 not found: ID does not exist" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.515945 4735 scope.go:117] "RemoveContainer" containerID="936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d" Jan 31 15:25:22 crc kubenswrapper[4735]: E0131 15:25:22.516501 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d\": container with ID starting with 936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d not found: ID does not exist" containerID="936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.516541 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d"} err="failed to get container status \"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d\": rpc error: code = NotFound desc = could not find container \"936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d\": container with ID starting with 936ee37266c04d78edc689cdb46f55f61b2d4dd53e924fdc0638364af543c99d not found: ID does not exist" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.516568 4735 scope.go:117] "RemoveContainer" containerID="629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4" Jan 31 15:25:22 crc kubenswrapper[4735]: E0131 15:25:22.516892 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4\": container with ID starting with 629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4 not found: ID does not exist" 
containerID="629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.516937 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4"} err="failed to get container status \"629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4\": rpc error: code = NotFound desc = could not find container \"629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4\": container with ID starting with 629a49f1291efa4c7a14c082da0f9df9605eaa939f147873f84f37da1e0667c4 not found: ID does not exist" Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.768510 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:22 crc kubenswrapper[4735]: I0131 15:25:22.777935 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqfkd"] Jan 31 15:25:23 crc kubenswrapper[4735]: I0131 15:25:23.551103 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170545bf-98b2-402f-a124-8e2e7e087782" path="/var/lib/kubelet/pods/170545bf-98b2-402f-a124-8e2e7e087782/volumes" Jan 31 15:25:27 crc kubenswrapper[4735]: I0131 15:25:27.540612 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:25:27 crc kubenswrapper[4735]: E0131 15:25:27.541728 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:25:36 crc kubenswrapper[4735]: I0131 15:25:36.577929 4735 generic.go:334] "Generic (PLEG): container finished" podID="a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" containerID="195aec68ba192b26d3262d9d01c84e818da9ae856abc0e02e67f6d860358fa85" exitCode=0 Jan 31 15:25:36 crc kubenswrapper[4735]: I0131 15:25:36.578053 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" event={"ID":"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7","Type":"ContainerDied","Data":"195aec68ba192b26d3262d9d01c84e818da9ae856abc0e02e67f6d860358fa85"} Jan 31 15:25:37 crc kubenswrapper[4735]: I0131 15:25:37.982649 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.115159 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphtg\" (UniqueName: \"kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg\") pod \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.115253 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory\") pod \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.115334 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam\") pod \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\" (UID: \"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7\") " Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.120087 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg" (OuterVolumeSpecName: "kube-api-access-jphtg") pod "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" (UID: "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7"). InnerVolumeSpecName "kube-api-access-jphtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.140090 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory" (OuterVolumeSpecName: "inventory") pod "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" (UID: "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.163827 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" (UID: "a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.218044 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphtg\" (UniqueName: \"kubernetes.io/projected/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-kube-api-access-jphtg\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.218092 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.218108 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.600951 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" event={"ID":"a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7","Type":"ContainerDied","Data":"00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83"} Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.601003 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00fa81732bba476c8d606fbed10b485f39a82341948473b49f8baa5e9a8a7f83" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.601067 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hspmq" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.718392 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l"] Jan 31 15:25:38 crc kubenswrapper[4735]: E0131 15:25:38.718845 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.718868 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:38 crc kubenswrapper[4735]: E0131 15:25:38.718885 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="registry-server" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.718893 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="registry-server" Jan 31 15:25:38 crc kubenswrapper[4735]: E0131 15:25:38.718910 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="extract-utilities" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.718919 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="extract-utilities" Jan 31 15:25:38 crc kubenswrapper[4735]: E0131 15:25:38.718945 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="extract-content" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.718952 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="extract-content" Jan 31 15:25:38 crc 
kubenswrapper[4735]: I0131 15:25:38.719208 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="170545bf-98b2-402f-a124-8e2e7e087782" containerName="registry-server" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.719225 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.720011 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.721867 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.722442 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.723018 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.723614 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.742016 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l"] Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.829149 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.829499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnlq\" (UniqueName: \"kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.829531 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.931511 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.931622 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-psnlq\" (UniqueName: \"kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.931657 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.935289 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.936528 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:38 crc kubenswrapper[4735]: I0131 15:25:38.960806 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnlq\" (UniqueName: \"kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sg76l\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.044752 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.392547 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l"] Jan 31 15:25:39 crc kubenswrapper[4735]: W0131 15:25:39.395699 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02498a0_04e3_4062_b19f_aa22ab9544a3.slice/crio-3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89 WatchSource:0}: Error finding container 3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89: Status 404 returned error can't find the container with id 3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89 Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.613462 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" event={"ID":"a02498a0-04e3-4062-b19f-aa22ab9544a3","Type":"ContainerStarted","Data":"3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89"} Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.667807 4735 scope.go:117] "RemoveContainer" containerID="f79f4e3ecbbdea61a0e862aedc731a56ae711faafccecf904854c1e09d472ae5" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.692562 4735 scope.go:117] "RemoveContainer" containerID="99c6677f1a611414472452da90d84a508b8b8138917ee7bd4d2772860d17b364" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.718705 4735 scope.go:117] "RemoveContainer" containerID="68746ad976b78121b441a768bc5103b19acad0d7d0016529e011d3e53b0784d4" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.749997 4735 scope.go:117] "RemoveContainer" containerID="624bfdd3a51382b6969fa68ed12cc671dac248d54dcd0b2e25aab0853df3ceef" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.768413 4735 scope.go:117] "RemoveContainer" containerID="87ba1d3c10ce22688b82db1035519bc40e7da0cd4bc6de78c90cb0340a879cf8" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.790260 4735 scope.go:117] "RemoveContainer" containerID="b0fce810328aa92d3f4797cca576c62b76d3330c47dbf3fd3beb9e086c82c08a" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.807045 4735 scope.go:117] "RemoveContainer" containerID="ec2c1b870356a5e4021599447fd42bf189a0b19e7629f839cfc7182e4535e1c4" Jan 31 15:25:39 crc kubenswrapper[4735]: I0131 15:25:39.847450 4735 scope.go:117] "RemoveContainer" containerID="af6c5a0e63026f181a446ddc54de04ae0289ecdcfd8098db2e5f9e26c5fe0ae6" Jan 31 15:25:40 crc kubenswrapper[4735]: I0131 15:25:40.540352 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:25:40 crc kubenswrapper[4735]: E0131 15:25:40.541251 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:25:40 crc kubenswrapper[4735]: I0131 15:25:40.626515 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" 
event={"ID":"a02498a0-04e3-4062-b19f-aa22ab9544a3","Type":"ContainerStarted","Data":"9d8dcf42fb58e45d11d1aae0e32bc029d569f58682422436bc2064073c6be4c5"} Jan 31 15:25:40 crc kubenswrapper[4735]: I0131 15:25:40.654342 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" podStartSLOduration=2.225174957 podStartE2EDuration="2.654320239s" podCreationTimestamp="2026-01-31 15:25:38 +0000 UTC" firstStartedPulling="2026-01-31 15:25:39.399184779 +0000 UTC m=+1625.172513821" lastFinishedPulling="2026-01-31 15:25:39.828330061 +0000 UTC m=+1625.601659103" observedRunningTime="2026-01-31 15:25:40.642607454 +0000 UTC m=+1626.415936536" watchObservedRunningTime="2026-01-31 15:25:40.654320239 +0000 UTC m=+1626.427649291" Jan 31 15:25:44 crc kubenswrapper[4735]: I0131 15:25:44.675298 4735 generic.go:334] "Generic (PLEG): container finished" podID="a02498a0-04e3-4062-b19f-aa22ab9544a3" containerID="9d8dcf42fb58e45d11d1aae0e32bc029d569f58682422436bc2064073c6be4c5" exitCode=0 Jan 31 15:25:44 crc kubenswrapper[4735]: I0131 15:25:44.675403 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" event={"ID":"a02498a0-04e3-4062-b19f-aa22ab9544a3","Type":"ContainerDied","Data":"9d8dcf42fb58e45d11d1aae0e32bc029d569f58682422436bc2064073c6be4c5"} Jan 31 15:25:45 crc kubenswrapper[4735]: I0131 15:25:45.070688 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rzt6q"] Jan 31 15:25:45 crc kubenswrapper[4735]: I0131 15:25:45.086203 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rzt6q"] Jan 31 15:25:45 crc kubenswrapper[4735]: I0131 15:25:45.558837 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce096009-4177-43cf-a0c2-76f2888ebea1" path="/var/lib/kubelet/pods/ce096009-4177-43cf-a0c2-76f2888ebea1/volumes" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.127370 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.187844 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory\") pod \"a02498a0-04e3-4062-b19f-aa22ab9544a3\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.188043 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnlq\" (UniqueName: \"kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq\") pod \"a02498a0-04e3-4062-b19f-aa22ab9544a3\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.188248 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam\") pod \"a02498a0-04e3-4062-b19f-aa22ab9544a3\" (UID: \"a02498a0-04e3-4062-b19f-aa22ab9544a3\") " Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.192950 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq" (OuterVolumeSpecName: "kube-api-access-psnlq") pod "a02498a0-04e3-4062-b19f-aa22ab9544a3" (UID: "a02498a0-04e3-4062-b19f-aa22ab9544a3"). InnerVolumeSpecName "kube-api-access-psnlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.214726 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory" (OuterVolumeSpecName: "inventory") pod "a02498a0-04e3-4062-b19f-aa22ab9544a3" (UID: "a02498a0-04e3-4062-b19f-aa22ab9544a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.230238 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a02498a0-04e3-4062-b19f-aa22ab9544a3" (UID: "a02498a0-04e3-4062-b19f-aa22ab9544a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.290635 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.290682 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a02498a0-04e3-4062-b19f-aa22ab9544a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.290692 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psnlq\" (UniqueName: \"kubernetes.io/projected/a02498a0-04e3-4062-b19f-aa22ab9544a3-kube-api-access-psnlq\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.716267 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" event={"ID":"a02498a0-04e3-4062-b19f-aa22ab9544a3","Type":"ContainerDied","Data":"3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89"} Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.716325 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c1a9ac95fa355149294a5f899454357e2ddd02e2db310ab29d5ec3ef5a41d89" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.716382 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sg76l" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.792217 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq"] Jan 31 15:25:46 crc kubenswrapper[4735]: E0131 15:25:46.793729 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02498a0-04e3-4062-b19f-aa22ab9544a3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.795738 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02498a0-04e3-4062-b19f-aa22ab9544a3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.796268 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02498a0-04e3-4062-b19f-aa22ab9544a3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.797085 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.801554 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.801983 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.802051 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.802208 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.825629 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq"] Jan 31 15:25:46 crc kubenswrapper[4735]: E0131 15:25:46.904246 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02498a0_04e3_4062_b19f_aa22ab9544a3.slice\": RecentStats: unable to find data in memory cache]" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.906296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.906372 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvlbp\" (UniqueName: \"kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:46 crc kubenswrapper[4735]: I0131 15:25:46.906716 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.012489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.012641 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvlbp\" (UniqueName: \"kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: 
\"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.012869 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.020005 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.021674 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.044540 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-xrbt2"] Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.047213 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvlbp\" (UniqueName: \"kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-964lq\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.052549 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-xrbt2"] Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.118901 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.553240 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5fd5ef-5566-4f7c-8e51-ed296536a540" path="/var/lib/kubelet/pods/2a5fd5ef-5566-4f7c-8e51-ed296536a540/volumes" Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.639396 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq"] Jan 31 15:25:47 crc kubenswrapper[4735]: I0131 15:25:47.726009 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" event={"ID":"7737dd79-8b1c-448a-a81d-5a06b58e32e1","Type":"ContainerStarted","Data":"d35a64bf640b502eda6d8985dbe47d076340707ab7d2fc76a907dc0d3ad03b34"} Jan 31 15:25:48 crc kubenswrapper[4735]: I0131 15:25:48.757130 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" event={"ID":"7737dd79-8b1c-448a-a81d-5a06b58e32e1","Type":"ContainerStarted","Data":"48a152dfdfff24ef3d7e9044bdf01dbbb9c2ad08bd7a54520f928bd54f542866"} Jan 31 15:25:48 crc kubenswrapper[4735]: I0131 15:25:48.811943 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" podStartSLOduration=2.3880744959999998 podStartE2EDuration="2.811917276s" podCreationTimestamp="2026-01-31 15:25:46 +0000 UTC" firstStartedPulling="2026-01-31 15:25:47.641538776 +0000 UTC m=+1633.414867818" lastFinishedPulling="2026-01-31 15:25:48.065381556 +0000 UTC m=+1633.838710598" observedRunningTime="2026-01-31 15:25:48.791096749 +0000 UTC m=+1634.564425831" watchObservedRunningTime="2026-01-31 15:25:48.811917276 +0000 UTC m=+1634.585246338" Jan 31 15:25:49 crc kubenswrapper[4735]: I0131 15:25:49.053280 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tgk2f"] Jan 31 15:25:49 crc kubenswrapper[4735]: I0131 15:25:49.069374 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tgk2f"] Jan 31 15:25:49 crc kubenswrapper[4735]: I0131 15:25:49.559021 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7152602-a51d-4f77-894f-6514ac5816b7" path="/var/lib/kubelet/pods/d7152602-a51d-4f77-894f-6514ac5816b7/volumes" Jan 31 15:25:54 crc kubenswrapper[4735]: I0131 15:25:54.540368 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:25:54 crc kubenswrapper[4735]: E0131 15:25:54.541387 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:25:59 crc kubenswrapper[4735]: I0131 15:25:59.032816 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bhf6c"] Jan 31 15:25:59 crc kubenswrapper[4735]: I0131 15:25:59.042009 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bhf6c"] Jan 31 15:25:59 crc kubenswrapper[4735]: I0131 15:25:59.560248 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98" path="/var/lib/kubelet/pods/c42c2dc5-da36-4bb1-9e9a-817e7f1aaa98/volumes" Jan 31 15:26:01 crc kubenswrapper[4735]: I0131 15:26:01.032999 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tt9w8"] Jan 31 15:26:01 crc kubenswrapper[4735]: I0131 15:26:01.043536 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tt9w8"] Jan 31 15:26:01 crc kubenswrapper[4735]: I0131 15:26:01.557243 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972df6a4-e6ad-41de-9573-b80779a22bd3" path="/var/lib/kubelet/pods/972df6a4-e6ad-41de-9573-b80779a22bd3/volumes" Jan 31 15:26:08 crc kubenswrapper[4735]: I0131 15:26:08.540451 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:26:08 crc kubenswrapper[4735]: E0131 15:26:08.541507 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:26:23 crc kubenswrapper[4735]: I0131 15:26:23.567973 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:26:23 crc kubenswrapper[4735]: E0131 15:26:23.569060 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:26:25 crc kubenswrapper[4735]: I0131 15:26:25.126376 4735 generic.go:334] "Generic (PLEG): container finished" podID="7737dd79-8b1c-448a-a81d-5a06b58e32e1" containerID="48a152dfdfff24ef3d7e9044bdf01dbbb9c2ad08bd7a54520f928bd54f542866" exitCode=0 Jan 31 15:26:25 crc kubenswrapper[4735]: I0131 15:26:25.126473 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" event={"ID":"7737dd79-8b1c-448a-a81d-5a06b58e32e1","Type":"ContainerDied","Data":"48a152dfdfff24ef3d7e9044bdf01dbbb9c2ad08bd7a54520f928bd54f542866"} Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.606759 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.684349 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory\") pod \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.684613 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvlbp\" (UniqueName: \"kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp\") pod \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.684692 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam\") pod \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\" (UID: \"7737dd79-8b1c-448a-a81d-5a06b58e32e1\") " Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.690938 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp" (OuterVolumeSpecName: "kube-api-access-hvlbp") pod "7737dd79-8b1c-448a-a81d-5a06b58e32e1" (UID: "7737dd79-8b1c-448a-a81d-5a06b58e32e1"). InnerVolumeSpecName "kube-api-access-hvlbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.721579 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7737dd79-8b1c-448a-a81d-5a06b58e32e1" (UID: "7737dd79-8b1c-448a-a81d-5a06b58e32e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.737583 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory" (OuterVolumeSpecName: "inventory") pod "7737dd79-8b1c-448a-a81d-5a06b58e32e1" (UID: "7737dd79-8b1c-448a-a81d-5a06b58e32e1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.786576 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.786609 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvlbp\" (UniqueName: \"kubernetes.io/projected/7737dd79-8b1c-448a-a81d-5a06b58e32e1-kube-api-access-hvlbp\") on node \"crc\" DevicePath \"\"" Jan 31 15:26:26 crc kubenswrapper[4735]: I0131 15:26:26.786621 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7737dd79-8b1c-448a-a81d-5a06b58e32e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.147669 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" event={"ID":"7737dd79-8b1c-448a-a81d-5a06b58e32e1","Type":"ContainerDied","Data":"d35a64bf640b502eda6d8985dbe47d076340707ab7d2fc76a907dc0d3ad03b34"} Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.147735 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d35a64bf640b502eda6d8985dbe47d076340707ab7d2fc76a907dc0d3ad03b34" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.147839 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-964lq" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.249109 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9"] Jan 31 15:26:27 crc kubenswrapper[4735]: E0131 15:26:27.249705 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7737dd79-8b1c-448a-a81d-5a06b58e32e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.249723 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7737dd79-8b1c-448a-a81d-5a06b58e32e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.249920 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7737dd79-8b1c-448a-a81d-5a06b58e32e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.250548 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.253040 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.253473 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.253652 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.255326 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.260516 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9"] Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.296765 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.296964 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sbz\" (UniqueName: \"kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.297110 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.399153 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.399219 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56sbz\" (UniqueName: \"kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.399267 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.403736 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.411083 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.415776 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56sbz\" (UniqueName: \"kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:27 crc kubenswrapper[4735]: I0131 15:26:27.573369 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:26:28 crc kubenswrapper[4735]: I0131 15:26:28.145038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9"] Jan 31 15:26:28 crc kubenswrapper[4735]: I0131 15:26:28.159534 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" event={"ID":"1898e10e-7cb4-453e-84f8-ee45e1b109a3","Type":"ContainerStarted","Data":"9fbc8655d4c51ade205d6911dd8b1ea2cecb5d1e6f59ecf07454f0dad50ad1a5"} Jan 31 15:26:29 crc kubenswrapper[4735]: I0131 15:26:29.171300 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" event={"ID":"1898e10e-7cb4-453e-84f8-ee45e1b109a3","Type":"ContainerStarted","Data":"09329272c7799cc828db69c403aea47586837d4c13f91ffbf55b19da86d88a40"} Jan 31 15:26:29 crc kubenswrapper[4735]: I0131 15:26:29.212309 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" podStartSLOduration=1.821940912 podStartE2EDuration="2.212280132s" podCreationTimestamp="2026-01-31 15:26:27 +0000 UTC" firstStartedPulling="2026-01-31 15:26:28.15106993 +0000 UTC m=+1673.924398992" lastFinishedPulling="2026-01-31 15:26:28.54140914 +0000 UTC m=+1674.314738212" observedRunningTime="2026-01-31 15:26:29.202366847 +0000 UTC m=+1674.975695909" watchObservedRunningTime="2026-01-31 15:26:29.212280132 +0000 UTC m=+1674.985609184" Jan 31 15:26:38 crc kubenswrapper[4735]: I0131 15:26:38.540292 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:26:38 crc kubenswrapper[4735]: 
E0131 15:26:38.540997 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:26:40 crc kubenswrapper[4735]: I0131 15:26:40.119152 4735 scope.go:117] "RemoveContainer" containerID="ad591be32839c11079c08e1f7e61178986f7c56fc7415a2091ac9c13b27c614b" Jan 31 15:26:40 crc kubenswrapper[4735]: I0131 15:26:40.170151 4735 scope.go:117] "RemoveContainer" containerID="5460f3b0dbc29ce53229a15e1d9182b9d50ea1b509f665e81cce8a75498bc15d" Jan 31 15:26:40 crc kubenswrapper[4735]: I0131 15:26:40.235517 4735 scope.go:117] "RemoveContainer" containerID="955c680b58a38b7d1fa55ba801249925c6aaf884941f0dcb32f8676ac0e4aefe" Jan 31 15:26:40 crc kubenswrapper[4735]: I0131 15:26:40.274014 4735 scope.go:117] "RemoveContainer" containerID="9b64367da740770fbbd72062799ce4fab6ac349cd2622f10d0fd2572037abfa1" Jan 31 15:26:40 crc kubenswrapper[4735]: I0131 15:26:40.325415 4735 scope.go:117] "RemoveContainer" containerID="31748d0a3288e5b8c7638f7250f52d80646f674cdc5d2f03b5998bcff84972c1" Jan 31 15:26:50 crc kubenswrapper[4735]: I0131 15:26:50.540108 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:26:50 crc kubenswrapper[4735]: E0131 15:26:50.540829 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.058751 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6hwhn"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.066459 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a96c-account-create-update-lkt5p"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.093014 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6hwhn"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.100844 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-m75l9"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.108260 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zp7s7"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.115660 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-m75l9"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.122722 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a96c-account-create-update-lkt5p"] Jan 31 15:26:54 crc kubenswrapper[4735]: I0131 15:26:54.130611 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zp7s7"] Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.056508 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e898-account-create-update-bbb72"] Jan 31 15:26:55 crc 
kubenswrapper[4735]: I0131 15:26:55.071142 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e898-account-create-update-bbb72"] Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.079819 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5ebc-account-create-update-pxltn"] Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.090389 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5ebc-account-create-update-pxltn"] Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.555292 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8d5779-64d7-431f-9202-eaa876df3de4" path="/var/lib/kubelet/pods/3f8d5779-64d7-431f-9202-eaa876df3de4/volumes" Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.557128 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b49ae1c-9cf5-4e9d-9142-502444638432" path="/var/lib/kubelet/pods/5b49ae1c-9cf5-4e9d-9142-502444638432/volumes" Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.558487 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a663199a-85bc-4931-bac4-6d060201ac38" path="/var/lib/kubelet/pods/a663199a-85bc-4931-bac4-6d060201ac38/volumes" Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.560107 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bf75c5-68b8-41d5-8815-6f6120f2271c" path="/var/lib/kubelet/pods/b2bf75c5-68b8-41d5-8815-6f6120f2271c/volumes" Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.562383 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b3a8e3-173f-4596-9712-ef4f7c324113" path="/var/lib/kubelet/pods/c8b3a8e3-173f-4596-9712-ef4f7c324113/volumes" Jan 31 15:26:55 crc kubenswrapper[4735]: I0131 15:26:55.562975 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54b2315-f6e7-4c5a-ab66-51d808de8aa1" path="/var/lib/kubelet/pods/d54b2315-f6e7-4c5a-ab66-51d808de8aa1/volumes" Jan 31 15:27:05 crc kubenswrapper[4735]: I0131 15:27:05.549406 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:27:05 crc kubenswrapper[4735]: E0131 15:27:05.550153 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:27:13 crc kubenswrapper[4735]: I0131 15:27:13.634670 4735 generic.go:334] "Generic (PLEG): container finished" podID="1898e10e-7cb4-453e-84f8-ee45e1b109a3" containerID="09329272c7799cc828db69c403aea47586837d4c13f91ffbf55b19da86d88a40" exitCode=0 Jan 31 15:27:13 crc kubenswrapper[4735]: I0131 15:27:13.634761 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" event={"ID":"1898e10e-7cb4-453e-84f8-ee45e1b109a3","Type":"ContainerDied","Data":"09329272c7799cc828db69c403aea47586837d4c13f91ffbf55b19da86d88a40"} Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.053808 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.127529 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56sbz\" (UniqueName: \"kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz\") pod \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.127661 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam\") pod \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.127750 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory\") pod \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\" (UID: \"1898e10e-7cb4-453e-84f8-ee45e1b109a3\") " Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.133554 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz" (OuterVolumeSpecName: "kube-api-access-56sbz") pod "1898e10e-7cb4-453e-84f8-ee45e1b109a3" (UID: "1898e10e-7cb4-453e-84f8-ee45e1b109a3"). InnerVolumeSpecName "kube-api-access-56sbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.152990 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory" (OuterVolumeSpecName: "inventory") pod "1898e10e-7cb4-453e-84f8-ee45e1b109a3" (UID: "1898e10e-7cb4-453e-84f8-ee45e1b109a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.168956 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1898e10e-7cb4-453e-84f8-ee45e1b109a3" (UID: "1898e10e-7cb4-453e-84f8-ee45e1b109a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.230170 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56sbz\" (UniqueName: \"kubernetes.io/projected/1898e10e-7cb4-453e-84f8-ee45e1b109a3-kube-api-access-56sbz\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.230197 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.230208 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1898e10e-7cb4-453e-84f8-ee45e1b109a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.659374 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" event={"ID":"1898e10e-7cb4-453e-84f8-ee45e1b109a3","Type":"ContainerDied","Data":"9fbc8655d4c51ade205d6911dd8b1ea2cecb5d1e6f59ecf07454f0dad50ad1a5"} Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.659455 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fbc8655d4c51ade205d6911dd8b1ea2cecb5d1e6f59ecf07454f0dad50ad1a5" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.659546 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.797492 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zlgxw"] Jan 31 15:27:15 crc kubenswrapper[4735]: E0131 15:27:15.797990 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1898e10e-7cb4-453e-84f8-ee45e1b109a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.798015 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1898e10e-7cb4-453e-84f8-ee45e1b109a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.798287 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1898e10e-7cb4-453e-84f8-ee45e1b109a3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.799082 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.802634 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.803396 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.804005 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.804267 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.809832 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zlgxw"] Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.842973 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.843214 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.843328 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqd24\" (UniqueName: \"kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.945832 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.945978 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqd24\" (UniqueName: \"kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.946089 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc 
kubenswrapper[4735]: I0131 15:27:15.949745 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.951498 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:15 crc kubenswrapper[4735]: I0131 15:27:15.963814 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqd24\" (UniqueName: \"kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24\") pod \"ssh-known-hosts-edpm-deployment-zlgxw\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:16 crc kubenswrapper[4735]: I0131 15:27:16.136873 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:16 crc kubenswrapper[4735]: I0131 15:27:16.750383 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zlgxw"] Jan 31 15:27:17 crc kubenswrapper[4735]: I0131 15:27:17.540820 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:27:17 crc kubenswrapper[4735]: E0131 15:27:17.541713 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:27:17 crc kubenswrapper[4735]: I0131 15:27:17.678940 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" event={"ID":"ddf071a4-728c-470f-829d-c905a4b60f9d","Type":"ContainerStarted","Data":"21caf7891a17cb404285a6a827e77d67bdb3e624f9d263933600928fbc26fe9d"} Jan 31 15:27:17 crc kubenswrapper[4735]: I0131 15:27:17.679463 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" event={"ID":"ddf071a4-728c-470f-829d-c905a4b60f9d","Type":"ContainerStarted","Data":"48fe9b7b9132ce74026a88fad36cb863196a1532b9f8dabc9c69846d5470f494"} Jan 31 15:27:17 crc kubenswrapper[4735]: I0131 15:27:17.701781 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" podStartSLOduration=2.2874216450000002 podStartE2EDuration="2.701757386s" podCreationTimestamp="2026-01-31 15:27:15 +0000 UTC" firstStartedPulling="2026-01-31 15:27:16.757644588 +0000 UTC m=+1722.530973630" lastFinishedPulling="2026-01-31 15:27:17.171980329 +0000 UTC m=+1722.945309371" observedRunningTime="2026-01-31 15:27:17.693002305 +0000 UTC m=+1723.466331417" watchObservedRunningTime="2026-01-31 15:27:17.701757386 +0000 UTC m=+1723.475086458" Jan 31 15:27:19 crc 
kubenswrapper[4735]: I0131 15:27:19.048534 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gddhd"] Jan 31 15:27:19 crc kubenswrapper[4735]: I0131 15:27:19.059150 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gddhd"] Jan 31 15:27:19 crc kubenswrapper[4735]: I0131 15:27:19.554366 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578e9195-5203-4b74-ae9b-90137faafc8b" path="/var/lib/kubelet/pods/578e9195-5203-4b74-ae9b-90137faafc8b/volumes" Jan 31 15:27:23 crc kubenswrapper[4735]: I0131 15:27:23.826097 4735 generic.go:334] "Generic (PLEG): container finished" podID="ddf071a4-728c-470f-829d-c905a4b60f9d" containerID="21caf7891a17cb404285a6a827e77d67bdb3e624f9d263933600928fbc26fe9d" exitCode=0 Jan 31 15:27:23 crc kubenswrapper[4735]: I0131 15:27:23.826210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" event={"ID":"ddf071a4-728c-470f-829d-c905a4b60f9d","Type":"ContainerDied","Data":"21caf7891a17cb404285a6a827e77d67bdb3e624f9d263933600928fbc26fe9d"} Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.317008 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.375889 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0\") pod \"ddf071a4-728c-470f-829d-c905a4b60f9d\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.377176 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqd24\" (UniqueName: \"kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24\") pod \"ddf071a4-728c-470f-829d-c905a4b60f9d\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.377411 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam\") pod \"ddf071a4-728c-470f-829d-c905a4b60f9d\" (UID: \"ddf071a4-728c-470f-829d-c905a4b60f9d\") " Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.394961 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24" (OuterVolumeSpecName: "kube-api-access-vqd24") pod "ddf071a4-728c-470f-829d-c905a4b60f9d" (UID: "ddf071a4-728c-470f-829d-c905a4b60f9d"). InnerVolumeSpecName "kube-api-access-vqd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.460483 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ddf071a4-728c-470f-829d-c905a4b60f9d" (UID: "ddf071a4-728c-470f-829d-c905a4b60f9d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.460640 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ddf071a4-728c-470f-829d-c905a4b60f9d" (UID: "ddf071a4-728c-470f-829d-c905a4b60f9d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.482562 4735 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.482599 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqd24\" (UniqueName: \"kubernetes.io/projected/ddf071a4-728c-470f-829d-c905a4b60f9d-kube-api-access-vqd24\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.482616 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf071a4-728c-470f-829d-c905a4b60f9d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.851625 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" event={"ID":"ddf071a4-728c-470f-829d-c905a4b60f9d","Type":"ContainerDied","Data":"48fe9b7b9132ce74026a88fad36cb863196a1532b9f8dabc9c69846d5470f494"} Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.852131 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48fe9b7b9132ce74026a88fad36cb863196a1532b9f8dabc9c69846d5470f494" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.851794 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zlgxw" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.950741 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2"] Jan 31 15:27:25 crc kubenswrapper[4735]: E0131 15:27:25.951598 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf071a4-728c-470f-829d-c905a4b60f9d" containerName="ssh-known-hosts-edpm-deployment" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.951740 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf071a4-728c-470f-829d-c905a4b60f9d" containerName="ssh-known-hosts-edpm-deployment" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.952219 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf071a4-728c-470f-829d-c905a4b60f9d" containerName="ssh-known-hosts-edpm-deployment" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.953418 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.961509 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2"] Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.962242 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.962579 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.962824 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.968504 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.992351 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.992663 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5krw\" (UniqueName: \"kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:25 crc kubenswrapper[4735]: I0131 15:27:25.992793 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.094516 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.094744 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.094928 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5krw\" (UniqueName: \"kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.097702 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.098952 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.112325 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5krw\" (UniqueName: \"kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8rvd2\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.272217 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:26 crc kubenswrapper[4735]: I0131 15:27:26.869687 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2"] Jan 31 15:27:27 crc kubenswrapper[4735]: I0131 15:27:27.874235 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" event={"ID":"c882b7b4-b823-41c7-98cd-862f19262e18","Type":"ContainerStarted","Data":"f0e489f1568d1b492c50bb60a345a6d3d674a80fbde6478738fb8509a08bdfda"} Jan 31 15:27:27 crc kubenswrapper[4735]: I0131 15:27:27.874530 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" event={"ID":"c882b7b4-b823-41c7-98cd-862f19262e18","Type":"ContainerStarted","Data":"8fa1a2affa7446a0bfdd7cc64b7038a9486586b8d49035b33c086c5083cfb463"} Jan 31 15:27:27 crc kubenswrapper[4735]: I0131 15:27:27.894881 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" podStartSLOduration=2.420863566 podStartE2EDuration="2.894864408s" podCreationTimestamp="2026-01-31 15:27:25 +0000 UTC" firstStartedPulling="2026-01-31 15:27:26.884327324 +0000 UTC m=+1732.657656356" lastFinishedPulling="2026-01-31 15:27:27.358328156 +0000 UTC m=+1733.131657198" observedRunningTime="2026-01-31 15:27:27.892920056 +0000 UTC m=+1733.666249108" watchObservedRunningTime="2026-01-31 15:27:27.894864408 +0000 UTC m=+1733.668193460" Jan 31 15:27:31 crc kubenswrapper[4735]: I0131 15:27:31.539894 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:27:31 crc kubenswrapper[4735]: E0131 15:27:31.540835 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:27:35 crc kubenswrapper[4735]: I0131 15:27:35.960195 4735 generic.go:334] "Generic (PLEG): container finished" podID="c882b7b4-b823-41c7-98cd-862f19262e18" containerID="f0e489f1568d1b492c50bb60a345a6d3d674a80fbde6478738fb8509a08bdfda" exitCode=0 Jan 31 15:27:35 crc kubenswrapper[4735]: I0131 15:27:35.961047 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" event={"ID":"c882b7b4-b823-41c7-98cd-862f19262e18","Type":"ContainerDied","Data":"f0e489f1568d1b492c50bb60a345a6d3d674a80fbde6478738fb8509a08bdfda"} Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.439355 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.553659 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5krw\" (UniqueName: \"kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw\") pod \"c882b7b4-b823-41c7-98cd-862f19262e18\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.553797 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory\") pod \"c882b7b4-b823-41c7-98cd-862f19262e18\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.553921 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam\") pod \"c882b7b4-b823-41c7-98cd-862f19262e18\" (UID: \"c882b7b4-b823-41c7-98cd-862f19262e18\") " Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.559494 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw" (OuterVolumeSpecName: "kube-api-access-d5krw") pod "c882b7b4-b823-41c7-98cd-862f19262e18" (UID: "c882b7b4-b823-41c7-98cd-862f19262e18"). InnerVolumeSpecName "kube-api-access-d5krw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.585461 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory" (OuterVolumeSpecName: "inventory") pod "c882b7b4-b823-41c7-98cd-862f19262e18" (UID: "c882b7b4-b823-41c7-98cd-862f19262e18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.602177 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c882b7b4-b823-41c7-98cd-862f19262e18" (UID: "c882b7b4-b823-41c7-98cd-862f19262e18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.664632 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.664669 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c882b7b4-b823-41c7-98cd-862f19262e18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.664685 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5krw\" (UniqueName: \"kubernetes.io/projected/c882b7b4-b823-41c7-98cd-862f19262e18-kube-api-access-d5krw\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.983586 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" event={"ID":"c882b7b4-b823-41c7-98cd-862f19262e18","Type":"ContainerDied","Data":"8fa1a2affa7446a0bfdd7cc64b7038a9486586b8d49035b33c086c5083cfb463"} Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.983631 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa1a2affa7446a0bfdd7cc64b7038a9486586b8d49035b33c086c5083cfb463" Jan 31 15:27:37 crc kubenswrapper[4735]: I0131 15:27:37.983994 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8rvd2" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.069947 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj"] Jan 31 15:27:38 crc kubenswrapper[4735]: E0131 15:27:38.070758 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c882b7b4-b823-41c7-98cd-862f19262e18" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.070781 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c882b7b4-b823-41c7-98cd-862f19262e18" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.072398 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c882b7b4-b823-41c7-98cd-862f19262e18" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.075057 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.077283 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.077310 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.077295 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.078993 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj"] Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.089568 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.173846 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m25m\" (UniqueName: \"kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.174399 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.174502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.276715 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m25m\" (UniqueName: \"kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.277125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.277274 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.289371 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.292788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m25m\" (UniqueName: \"kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.303789 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.402573 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.909129 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj"] Jan 31 15:27:38 crc kubenswrapper[4735]: I0131 15:27:38.993745 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" event={"ID":"0e45ff90-974c-42ba-986c-9303f5cde30f","Type":"ContainerStarted","Data":"320b94536316e4ef35aba9d2e0f126d3afc0eb67c846780c2e7657f4b6aec9d0"} Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.003938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" event={"ID":"0e45ff90-974c-42ba-986c-9303f5cde30f","Type":"ContainerStarted","Data":"2eacc5c1ed4cf9345af4afebab06ce61713ee5be08d6bb2ddbdc75cd9cd7b1b9"} Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.024441 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" podStartSLOduration=1.605574253 podStartE2EDuration="2.024400548s" podCreationTimestamp="2026-01-31 15:27:38 +0000 UTC" firstStartedPulling="2026-01-31 15:27:38.912044646 +0000 UTC m=+1744.685373688" lastFinishedPulling="2026-01-31 15:27:39.330870931 +0000 UTC m=+1745.104199983" observedRunningTime="2026-01-31 15:27:40.022482736 +0000 UTC m=+1745.795811778" watchObservedRunningTime="2026-01-31 15:27:40.024400548 +0000 UTC m=+1745.797729590" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.451280 4735 scope.go:117] "RemoveContainer" containerID="7053a470168586a1d76f370c5ce6a1d733a91484a850eb2452efddf927a25358" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.509730 4735 scope.go:117] "RemoveContainer" 
containerID="fd3ae525f3b50033fe4b071ae77295b9f5f80da46513d9ce08db8542309652a8" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.532491 4735 scope.go:117] "RemoveContainer" containerID="158bbd6a9ec5510ff12ebc71df1ec14cf8e726de4aefcb76c3773e47f76ea832" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.575156 4735 scope.go:117] "RemoveContainer" containerID="c9d435fe0af85102cbe325d3dd26f08f5cbae1a04ed65f183d450d140b0df0b0" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.619505 4735 scope.go:117] "RemoveContainer" containerID="eadc9a029ace25a7b082c1c5e3179d31bdba1b9f4ad148adfabe34457fa606e9" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.667014 4735 scope.go:117] "RemoveContainer" containerID="a8fbbb84d4cf75b4d760b529f62865ce263641943d6418088c1d70ec81746a83" Jan 31 15:27:40 crc kubenswrapper[4735]: I0131 15:27:40.700654 4735 scope.go:117] "RemoveContainer" containerID="9febf6313c5438c533888751d30df54fb9e7a58f4d797198ca5a69be53f74ac7" Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.056968 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6p66x"] Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.073312 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fxdfs"] Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.100937 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fxdfs"] Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.115812 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6p66x"] Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.557304 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38bef073-0ba5-43e1-8532-cb868269bfc1" path="/var/lib/kubelet/pods/38bef073-0ba5-43e1-8532-cb868269bfc1/volumes" Jan 31 15:27:41 crc kubenswrapper[4735]: I0131 15:27:41.557887 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b20a23a-80f2-4a93-81e2-062fec775d79" path="/var/lib/kubelet/pods/8b20a23a-80f2-4a93-81e2-062fec775d79/volumes" Jan 31 15:27:45 crc kubenswrapper[4735]: I0131 15:27:45.552963 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:27:45 crc kubenswrapper[4735]: E0131 15:27:45.554348 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:27:49 crc kubenswrapper[4735]: I0131 15:27:49.095360 4735 generic.go:334] "Generic (PLEG): container finished" podID="0e45ff90-974c-42ba-986c-9303f5cde30f" containerID="2eacc5c1ed4cf9345af4afebab06ce61713ee5be08d6bb2ddbdc75cd9cd7b1b9" exitCode=0 Jan 31 15:27:49 crc kubenswrapper[4735]: I0131 15:27:49.095631 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" event={"ID":"0e45ff90-974c-42ba-986c-9303f5cde30f","Type":"ContainerDied","Data":"2eacc5c1ed4cf9345af4afebab06ce61713ee5be08d6bb2ddbdc75cd9cd7b1b9"} Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.542823 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.629782 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory\") pod \"0e45ff90-974c-42ba-986c-9303f5cde30f\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.629859 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam\") pod \"0e45ff90-974c-42ba-986c-9303f5cde30f\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.629924 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m25m\" (UniqueName: \"kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m\") pod \"0e45ff90-974c-42ba-986c-9303f5cde30f\" (UID: \"0e45ff90-974c-42ba-986c-9303f5cde30f\") " Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.638367 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m" (OuterVolumeSpecName: "kube-api-access-2m25m") pod "0e45ff90-974c-42ba-986c-9303f5cde30f" (UID: "0e45ff90-974c-42ba-986c-9303f5cde30f"). InnerVolumeSpecName "kube-api-access-2m25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.673576 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory" (OuterVolumeSpecName: "inventory") pod "0e45ff90-974c-42ba-986c-9303f5cde30f" (UID: "0e45ff90-974c-42ba-986c-9303f5cde30f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.676435 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e45ff90-974c-42ba-986c-9303f5cde30f" (UID: "0e45ff90-974c-42ba-986c-9303f5cde30f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.734306 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.734581 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e45ff90-974c-42ba-986c-9303f5cde30f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:50 crc kubenswrapper[4735]: I0131 15:27:50.734597 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m25m\" (UniqueName: \"kubernetes.io/projected/0e45ff90-974c-42ba-986c-9303f5cde30f-kube-api-access-2m25m\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.121036 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" event={"ID":"0e45ff90-974c-42ba-986c-9303f5cde30f","Type":"ContainerDied","Data":"320b94536316e4ef35aba9d2e0f126d3afc0eb67c846780c2e7657f4b6aec9d0"} Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.121098 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320b94536316e4ef35aba9d2e0f126d3afc0eb67c846780c2e7657f4b6aec9d0" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.121139 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.256929 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf"] Jan 31 15:27:51 crc kubenswrapper[4735]: E0131 15:27:51.257401 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e45ff90-974c-42ba-986c-9303f5cde30f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.257442 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e45ff90-974c-42ba-986c-9303f5cde30f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.257678 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e45ff90-974c-42ba-986c-9303f5cde30f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.258485 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.261075 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.261087 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.261524 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.261550 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.261727 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.263333 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.266262 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.266590 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.266884 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf"] Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.347672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348113 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrm7\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348401 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348599 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348659 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348742 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348894 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.348982 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.349026 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.349075 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.349101 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.349169 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450040 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450080 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450120 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 
crc kubenswrapper[4735]: I0131 15:27:51.450206 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450239 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450267 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450339 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450368 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450588 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.450631 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrm7\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.451020 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.456202 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.456509 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.456545 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.456691 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.457344 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.457939 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.458731 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.459387 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.459463 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.459741 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.460406 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.463026 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.468480 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.478577 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrm7\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:51 crc kubenswrapper[4735]: I0131 15:27:51.576032 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:27:52 crc kubenswrapper[4735]: I0131 15:27:52.170817 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf"] Jan 31 15:27:52 crc kubenswrapper[4735]: W0131 15:27:52.176573 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cfb222c_2a44_4521_af34_3d352b0cfdea.slice/crio-981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1 WatchSource:0}: Error finding container 981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1: Status 404 returned error can't find the container with id 981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1 Jan 31 15:27:53 crc kubenswrapper[4735]: I0131 15:27:53.141196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" event={"ID":"2cfb222c-2a44-4521-af34-3d352b0cfdea","Type":"ContainerStarted","Data":"328d84915fd2d81abb2fc89ddb8bcad80f9140bceb9c114353cd38d313cdb20f"} Jan 31 15:27:53 crc kubenswrapper[4735]: I0131 15:27:53.141546 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" event={"ID":"2cfb222c-2a44-4521-af34-3d352b0cfdea","Type":"ContainerStarted","Data":"981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1"} Jan 31 15:27:53 crc kubenswrapper[4735]: I0131 15:27:53.159802 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" podStartSLOduration=1.763368597 podStartE2EDuration="2.159781451s" podCreationTimestamp="2026-01-31 15:27:51 +0000 UTC" firstStartedPulling="2026-01-31 15:27:52.179788003 +0000 UTC m=+1757.953117075" lastFinishedPulling="2026-01-31 15:27:52.576200887 +0000 UTC m=+1758.349529929" observedRunningTime="2026-01-31 15:27:53.159291266 +0000 UTC m=+1758.932620318" watchObservedRunningTime="2026-01-31 15:27:53.159781451 +0000 UTC m=+1758.933110493" Jan 31 15:27:58 crc kubenswrapper[4735]: I0131 15:27:58.540745 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:27:58 crc kubenswrapper[4735]: E0131 15:27:58.541545 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:28:12 crc kubenswrapper[4735]: I0131 15:28:12.540695 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:28:12 crc kubenswrapper[4735]: E0131 15:28:12.541688 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:28:27 crc kubenswrapper[4735]: I0131 15:28:27.042284 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lddf6"] Jan 31 15:28:27 crc kubenswrapper[4735]: I0131 15:28:27.047934 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lddf6"] Jan 31 15:28:27 crc kubenswrapper[4735]: I0131 15:28:27.542351 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:28:27 crc kubenswrapper[4735]: E0131 15:28:27.543107 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:28:27 crc kubenswrapper[4735]: I0131 15:28:27.571808 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014134de-eb91-414a-a8a8-6ffe3cae5e72" path="/var/lib/kubelet/pods/014134de-eb91-414a-a8a8-6ffe3cae5e72/volumes" Jan 31 15:28:28 crc kubenswrapper[4735]: I0131 15:28:28.466578 4735 generic.go:334] "Generic (PLEG): container finished" podID="2cfb222c-2a44-4521-af34-3d352b0cfdea" containerID="328d84915fd2d81abb2fc89ddb8bcad80f9140bceb9c114353cd38d313cdb20f" exitCode=0 Jan 31 15:28:28 crc kubenswrapper[4735]: I0131 15:28:28.466643 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" event={"ID":"2cfb222c-2a44-4521-af34-3d352b0cfdea","Type":"ContainerDied","Data":"328d84915fd2d81abb2fc89ddb8bcad80f9140bceb9c114353cd38d313cdb20f"} Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.024830 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145507 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145583 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145632 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145691 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145727 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145777 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrm7\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145819 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145862 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.145899 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc 
kubenswrapper[4735]: I0131 15:28:30.145991 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.146069 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.146110 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.146145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.146189 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2cfb222c-2a44-4521-af34-3d352b0cfdea\" (UID: \"2cfb222c-2a44-4521-af34-3d352b0cfdea\") " Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.152246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.153068 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.153496 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.154701 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.154723 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.155473 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.156059 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.156364 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.156395 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.156792 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.157110 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.159247 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7" (OuterVolumeSpecName: "kube-api-access-nhrm7") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "kube-api-access-nhrm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.178914 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.183938 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory" (OuterVolumeSpecName: "inventory") pod "2cfb222c-2a44-4521-af34-3d352b0cfdea" (UID: "2cfb222c-2a44-4521-af34-3d352b0cfdea"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.248873 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.249363 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.249571 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrm7\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-kube-api-access-nhrm7\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.249649 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.249706 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.249783 4735 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250265 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250393 4735 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250522 4735 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250624 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250743 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250837 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.250922 4735 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb222c-2a44-4521-af34-3d352b0cfdea-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.251004 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cfb222c-2a44-4521-af34-3d352b0cfdea-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.492071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" event={"ID":"2cfb222c-2a44-4521-af34-3d352b0cfdea","Type":"ContainerDied","Data":"981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1"} Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.492142 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="981092b4eddd1e006a8c9a5acada27117b0809a0dc8521e1a645a2985d7c45f1" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.492159 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.636948 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl"] Jan 31 15:28:30 crc kubenswrapper[4735]: E0131 15:28:30.638056 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfb222c-2a44-4521-af34-3d352b0cfdea" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.638245 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfb222c-2a44-4521-af34-3d352b0cfdea" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.638874 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfb222c-2a44-4521-af34-3d352b0cfdea" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.640139 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.650135 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl"] Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.681542 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.681542 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.681790 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.681933 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.681936 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.682377 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.682414 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.682518 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7sk\" (UniqueName: \"kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.682603 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.682666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.783752 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.783802 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.783856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng7sk\" (UniqueName: \"kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.783916 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.783960 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.784869 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.794512 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.796128 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.797160 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.804077 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng7sk\" (UniqueName: \"kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rq2xl\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:30 crc kubenswrapper[4735]: I0131 15:28:30.997322 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:28:31 crc kubenswrapper[4735]: I0131 15:28:31.529076 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl"] Jan 31 15:28:32 crc kubenswrapper[4735]: I0131 15:28:32.512435 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" event={"ID":"67db3c53-552c-458d-b333-09ad7b0f0447","Type":"ContainerStarted","Data":"d9d043a23f9d9b53c2d4a965256c177852585cf50493d12b7c5f2d2c084a2b22"} Jan 31 15:28:32 crc kubenswrapper[4735]: I0131 15:28:32.512863 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" event={"ID":"67db3c53-552c-458d-b333-09ad7b0f0447","Type":"ContainerStarted","Data":"6ccde319c0d331f7503aaf97348986d3e771661eaefb02e16e1a468abcf3e54a"} Jan 31 15:28:32 crc kubenswrapper[4735]: I0131 15:28:32.550723 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" podStartSLOduration=2.093265172 podStartE2EDuration="2.55069224s" podCreationTimestamp="2026-01-31 15:28:30 +0000 UTC" firstStartedPulling="2026-01-31 15:28:31.529311403 +0000 UTC m=+1797.302640455" lastFinishedPulling="2026-01-31 15:28:31.986738441 +0000 UTC m=+1797.760067523" observedRunningTime="2026-01-31 15:28:32.535572835 +0000 UTC m=+1798.308901937" watchObservedRunningTime="2026-01-31 15:28:32.55069224 +0000 UTC m=+1798.324021312" Jan 31 15:28:40 crc kubenswrapper[4735]: I0131 15:28:40.846628 4735 scope.go:117] "RemoveContainer" containerID="961e2e391877e8b195fc8a0c8d463824ce20500e9ec683ddf6d7b7f308c17a87" Jan 31 15:28:40 crc kubenswrapper[4735]: I0131 15:28:40.904950 4735 scope.go:117] "RemoveContainer" containerID="4547cb9cc23b5d8fba5621625bfb2f92b02a720a1bc302289e737536e9deb00e" Jan 31 15:28:40 crc kubenswrapper[4735]: I0131 15:28:40.954265 4735 scope.go:117] "RemoveContainer" containerID="60d98f5271f1b92d7e1f95fa6aa1768ec8189fd62624e2a701a6dd18cb372368" Jan 31 15:28:42 crc kubenswrapper[4735]: I0131 15:28:42.541656 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:28:42 crc kubenswrapper[4735]: E0131 15:28:42.542697 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" 
podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.809249 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.812313 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.857065 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.878168 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.878235 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4fc\" (UniqueName: \"kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.878538 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.980125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.980177 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4fc\" (UniqueName: \"kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.980264 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.980865 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:49 crc kubenswrapper[4735]: I0131 15:28:49.981364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.002030 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4fc\" (UniqueName: \"kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc\") pod \"community-operators-kmqdt\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.004804 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.006739 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.016555 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.082162 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.082295 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrxq\" (UniqueName: \"kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.082328 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.142128 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.184338 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.184465 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrxq\" (UniqueName: \"kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.184494 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.185047 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.185269 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.215239 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrxq\" (UniqueName: \"kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq\") pod \"redhat-marketplace-h948m\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.373938 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:28:50 crc kubenswrapper[4735]: W0131 15:28:50.713707 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0367752_ba32_49db_b522_aa19fae7f7ac.slice/crio-69c05f27a40f752637ef4c195a9440ed5fb8dade2b34ee564043f5adf7411611 WatchSource:0}: Error finding container 69c05f27a40f752637ef4c195a9440ed5fb8dade2b34ee564043f5adf7411611: Status 404 returned error can't find the container with id 69c05f27a40f752637ef4c195a9440ed5fb8dade2b34ee564043f5adf7411611 Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.713744 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:28:50 crc kubenswrapper[4735]: I0131 15:28:50.869959 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:28:50 crc kubenswrapper[4735]: W0131 15:28:50.877630 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af49c32_a5f8_4f9d_9db7_1d45436498ea.slice/crio-8ae235f9c7512573c393bd7095d2bad77d74680168944399c6806dbd4d1e4f97 WatchSource:0}: Error finding container 8ae235f9c7512573c393bd7095d2bad77d74680168944399c6806dbd4d1e4f97: Status 404 returned error can't find the container with id 8ae235f9c7512573c393bd7095d2bad77d74680168944399c6806dbd4d1e4f97 Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.729967 4735 generic.go:334] "Generic (PLEG): container finished" podID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerID="cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233" exitCode=0 Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.730234 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerDied","Data":"cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233"} Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.730391 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerStarted","Data":"8ae235f9c7512573c393bd7095d2bad77d74680168944399c6806dbd4d1e4f97"} Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.734378 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.734805 4735 generic.go:334] "Generic (PLEG): container finished" podID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerID="d8c1f5724efef4b381ebfd5547782a12535c5950e2962277fa8fa69cb6f0c39d" exitCode=0 Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.734845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerDied","Data":"d8c1f5724efef4b381ebfd5547782a12535c5950e2962277fa8fa69cb6f0c39d"} Jan 31 15:28:51 crc kubenswrapper[4735]: I0131 15:28:51.734890 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerStarted","Data":"69c05f27a40f752637ef4c195a9440ed5fb8dade2b34ee564043f5adf7411611"} Jan 31 15:28:53 crc kubenswrapper[4735]: I0131 15:28:53.753315 4735 generic.go:334] 
"Generic (PLEG): container finished" podID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerID="af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3" exitCode=0 Jan 31 15:28:53 crc kubenswrapper[4735]: I0131 15:28:53.753461 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerDied","Data":"af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3"} Jan 31 15:28:53 crc kubenswrapper[4735]: I0131 15:28:53.759902 4735 generic.go:334] "Generic (PLEG): container finished" podID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerID="4e0f80d3a812c3692c4be50cf476c5d009a788f3893d4db811b03898eddfa3f4" exitCode=0 Jan 31 15:28:53 crc kubenswrapper[4735]: I0131 15:28:53.759934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerDied","Data":"4e0f80d3a812c3692c4be50cf476c5d009a788f3893d4db811b03898eddfa3f4"} Jan 31 15:28:54 crc kubenswrapper[4735]: I0131 15:28:54.768558 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerStarted","Data":"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c"} Jan 31 15:28:54 crc kubenswrapper[4735]: I0131 15:28:54.773207 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerStarted","Data":"233a7af144fcafe0c4811308a7891f2d4918e3fc66b15a1634df558b6701c672"} Jan 31 15:28:54 crc kubenswrapper[4735]: I0131 15:28:54.797519 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h948m" podStartSLOduration=3.287233795 podStartE2EDuration="5.797499475s" podCreationTimestamp="2026-01-31 15:28:49 +0000 UTC" firstStartedPulling="2026-01-31 15:28:51.733775719 +0000 UTC m=+1817.507104801" lastFinishedPulling="2026-01-31 15:28:54.244041429 +0000 UTC m=+1820.017370481" observedRunningTime="2026-01-31 15:28:54.789445754 +0000 UTC m=+1820.562774796" watchObservedRunningTime="2026-01-31 15:28:54.797499475 +0000 UTC m=+1820.570828517" Jan 31 15:28:54 crc kubenswrapper[4735]: I0131 15:28:54.809670 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kmqdt" podStartSLOduration=3.3839102260000002 podStartE2EDuration="5.809646581s" podCreationTimestamp="2026-01-31 15:28:49 +0000 UTC" firstStartedPulling="2026-01-31 15:28:51.737574908 +0000 UTC m=+1817.510903990" lastFinishedPulling="2026-01-31 15:28:54.163311273 +0000 UTC m=+1819.936640345" observedRunningTime="2026-01-31 15:28:54.806615365 +0000 UTC m=+1820.579944407" watchObservedRunningTime="2026-01-31 15:28:54.809646581 +0000 UTC m=+1820.582975623" Jan 31 15:28:57 crc kubenswrapper[4735]: I0131 15:28:57.540235 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:28:57 crc kubenswrapper[4735]: E0131 15:28:57.540815 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.142485 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.142856 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.243364 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.374674 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.374747 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.425037 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.885222 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:00 crc kubenswrapper[4735]: I0131 15:29:00.897921 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:01 crc kubenswrapper[4735]: I0131 15:29:01.897727 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:29:02 crc kubenswrapper[4735]: I0131 15:29:02.845372 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h948m" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="registry-server" containerID="cri-o://4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c" gracePeriod=2 Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.293657 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.313363 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.453028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities\") pod \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.453100 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrxq\" (UniqueName: \"kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq\") pod \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.453184 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content\") pod \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\" (UID: \"5af49c32-a5f8-4f9d-9db7-1d45436498ea\") " Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.453950 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities" (OuterVolumeSpecName: "utilities") pod "5af49c32-a5f8-4f9d-9db7-1d45436498ea" (UID: "5af49c32-a5f8-4f9d-9db7-1d45436498ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.460397 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq" (OuterVolumeSpecName: "kube-api-access-jbrxq") pod "5af49c32-a5f8-4f9d-9db7-1d45436498ea" (UID: "5af49c32-a5f8-4f9d-9db7-1d45436498ea"). InnerVolumeSpecName "kube-api-access-jbrxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.486680 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5af49c32-a5f8-4f9d-9db7-1d45436498ea" (UID: "5af49c32-a5f8-4f9d-9db7-1d45436498ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.555012 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.555039 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrxq\" (UniqueName: \"kubernetes.io/projected/5af49c32-a5f8-4f9d-9db7-1d45436498ea-kube-api-access-jbrxq\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.555050 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5af49c32-a5f8-4f9d-9db7-1d45436498ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.856821 4735 generic.go:334] "Generic (PLEG): container finished" podID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerID="4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c" exitCode=0 Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.857402 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kmqdt" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="registry-server" containerID="cri-o://233a7af144fcafe0c4811308a7891f2d4918e3fc66b15a1634df558b6701c672" gracePeriod=2 Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.858054 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h948m" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.858157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerDied","Data":"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c"} Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.858281 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h948m" event={"ID":"5af49c32-a5f8-4f9d-9db7-1d45436498ea","Type":"ContainerDied","Data":"8ae235f9c7512573c393bd7095d2bad77d74680168944399c6806dbd4d1e4f97"} Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.858364 4735 scope.go:117] "RemoveContainer" containerID="4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.887094 4735 scope.go:117] "RemoveContainer" containerID="af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.898809 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.912986 4735 scope.go:117] "RemoveContainer" containerID="cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.913831 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h948m"] Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.998254 4735 scope.go:117] "RemoveContainer" containerID="4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c" Jan 31 15:29:03 crc kubenswrapper[4735]: E0131 15:29:03.998847 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c\": container with ID starting with 4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c not found: ID does not exist" containerID="4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.998963 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c"} err="failed to get container status \"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c\": rpc error: code = NotFound desc = could not find container \"4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c\": container with ID starting with 4171f68dc00a2c686efd54d12e894b39e9c31b52eda154175c9f4e8ecae72e7c not found: ID does not exist" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.998997 4735 scope.go:117] "RemoveContainer" containerID="af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3" Jan 31 15:29:03 crc kubenswrapper[4735]: E0131 15:29:03.999488 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3\": container with ID starting with af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3 not found: ID does not exist" containerID="af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.999542 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3"} err="failed to get container status \"af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3\": rpc error: code = NotFound desc = could not find container \"af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3\": container with ID starting with af824b79039f4a7d481fd3b8c3b83b540c09fd3c6d0f3059323fb42f5d7ca3c3 not found: ID does not exist" Jan 31 15:29:03 crc kubenswrapper[4735]: I0131 15:29:03.999574 4735 scope.go:117] "RemoveContainer" containerID="cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233" Jan 31 15:29:03 crc kubenswrapper[4735]: E0131 15:29:03.999886 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233\": container with ID starting with cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233 not found: ID does not exist" containerID="cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233" Jan 31 15:29:04 crc kubenswrapper[4735]: I0131 15:29:03.999916 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233"} err="failed to get container status \"cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233\": rpc error: code = NotFound desc = could not find container \"cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233\": container with ID starting with cbae0090ad657237871c6a45d1c36e17f392b994a121e5a3f5dd2d3ca277f233 not found: ID does not exist" Jan 31 15:29:04 crc kubenswrapper[4735]: I0131 15:29:04.870555 4735 generic.go:334] "Generic (PLEG): container finished" podID="a0367752-ba32-49db-b522-aa19fae7f7ac" 
containerID="233a7af144fcafe0c4811308a7891f2d4918e3fc66b15a1634df558b6701c672" exitCode=0 Jan 31 15:29:04 crc kubenswrapper[4735]: I0131 15:29:04.870639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerDied","Data":"233a7af144fcafe0c4811308a7891f2d4918e3fc66b15a1634df558b6701c672"} Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.439749 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.551836 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" path="/var/lib/kubelet/pods/5af49c32-a5f8-4f9d-9db7-1d45436498ea/volumes" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.600680 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4fc\" (UniqueName: \"kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc\") pod \"a0367752-ba32-49db-b522-aa19fae7f7ac\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.600815 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities\") pod \"a0367752-ba32-49db-b522-aa19fae7f7ac\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.600868 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content\") pod \"a0367752-ba32-49db-b522-aa19fae7f7ac\" (UID: \"a0367752-ba32-49db-b522-aa19fae7f7ac\") " Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.602117 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities" (OuterVolumeSpecName: "utilities") pod "a0367752-ba32-49db-b522-aa19fae7f7ac" (UID: "a0367752-ba32-49db-b522-aa19fae7f7ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.609088 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc" (OuterVolumeSpecName: "kube-api-access-dx4fc") pod "a0367752-ba32-49db-b522-aa19fae7f7ac" (UID: "a0367752-ba32-49db-b522-aa19fae7f7ac"). InnerVolumeSpecName "kube-api-access-dx4fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.670515 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0367752-ba32-49db-b522-aa19fae7f7ac" (UID: "a0367752-ba32-49db-b522-aa19fae7f7ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.703732 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4fc\" (UniqueName: \"kubernetes.io/projected/a0367752-ba32-49db-b522-aa19fae7f7ac-kube-api-access-dx4fc\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.703773 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.703785 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0367752-ba32-49db-b522-aa19fae7f7ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.888615 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmqdt" event={"ID":"a0367752-ba32-49db-b522-aa19fae7f7ac","Type":"ContainerDied","Data":"69c05f27a40f752637ef4c195a9440ed5fb8dade2b34ee564043f5adf7411611"} Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.888722 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kmqdt" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.888733 4735 scope.go:117] "RemoveContainer" containerID="233a7af144fcafe0c4811308a7891f2d4918e3fc66b15a1634df558b6701c672" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.932798 4735 scope.go:117] "RemoveContainer" containerID="4e0f80d3a812c3692c4be50cf476c5d009a788f3893d4db811b03898eddfa3f4" Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.942933 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.955721 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kmqdt"] Jan 31 15:29:05 crc kubenswrapper[4735]: I0131 15:29:05.964784 4735 scope.go:117] "RemoveContainer" containerID="d8c1f5724efef4b381ebfd5547782a12535c5950e2962277fa8fa69cb6f0c39d" Jan 31 15:29:07 crc kubenswrapper[4735]: I0131 15:29:07.553033 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" path="/var/lib/kubelet/pods/a0367752-ba32-49db-b522-aa19fae7f7ac/volumes" Jan 31 15:29:10 crc kubenswrapper[4735]: I0131 15:29:10.539770 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:29:10 crc kubenswrapper[4735]: I0131 15:29:10.939132 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f"} Jan 31 15:29:31 crc kubenswrapper[4735]: I0131 15:29:31.144795 4735 generic.go:334] "Generic (PLEG): container finished" podID="67db3c53-552c-458d-b333-09ad7b0f0447" containerID="d9d043a23f9d9b53c2d4a965256c177852585cf50493d12b7c5f2d2c084a2b22" exitCode=0 Jan 31 15:29:31 crc kubenswrapper[4735]: I0131 15:29:31.144887 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" 
event={"ID":"67db3c53-552c-458d-b333-09ad7b0f0447","Type":"ContainerDied","Data":"d9d043a23f9d9b53c2d4a965256c177852585cf50493d12b7c5f2d2c084a2b22"} Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.600050 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.662455 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam\") pod \"67db3c53-552c-458d-b333-09ad7b0f0447\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.662605 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle\") pod \"67db3c53-552c-458d-b333-09ad7b0f0447\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.662639 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0\") pod \"67db3c53-552c-458d-b333-09ad7b0f0447\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.662687 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng7sk\" (UniqueName: \"kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk\") pod \"67db3c53-552c-458d-b333-09ad7b0f0447\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.662731 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory\") pod \"67db3c53-552c-458d-b333-09ad7b0f0447\" (UID: \"67db3c53-552c-458d-b333-09ad7b0f0447\") " Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.668704 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk" (OuterVolumeSpecName: "kube-api-access-ng7sk") pod "67db3c53-552c-458d-b333-09ad7b0f0447" (UID: "67db3c53-552c-458d-b333-09ad7b0f0447"). InnerVolumeSpecName "kube-api-access-ng7sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.668769 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "67db3c53-552c-458d-b333-09ad7b0f0447" (UID: "67db3c53-552c-458d-b333-09ad7b0f0447"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.692668 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory" (OuterVolumeSpecName: "inventory") pod "67db3c53-552c-458d-b333-09ad7b0f0447" (UID: "67db3c53-552c-458d-b333-09ad7b0f0447"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.694710 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "67db3c53-552c-458d-b333-09ad7b0f0447" (UID: "67db3c53-552c-458d-b333-09ad7b0f0447"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.698970 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67db3c53-552c-458d-b333-09ad7b0f0447" (UID: "67db3c53-552c-458d-b333-09ad7b0f0447"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.766035 4735 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/67db3c53-552c-458d-b333-09ad7b0f0447-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.766073 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng7sk\" (UniqueName: \"kubernetes.io/projected/67db3c53-552c-458d-b333-09ad7b0f0447-kube-api-access-ng7sk\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.766083 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.766105 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4735]: I0131 15:29:32.766116 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67db3c53-552c-458d-b333-09ad7b0f0447-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.166318 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" event={"ID":"67db3c53-552c-458d-b333-09ad7b0f0447","Type":"ContainerDied","Data":"6ccde319c0d331f7503aaf97348986d3e771661eaefb02e16e1a468abcf3e54a"} Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.166364 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccde319c0d331f7503aaf97348986d3e771661eaefb02e16e1a468abcf3e54a" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.166440 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rq2xl" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.266532 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg"] Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.266896 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="extract-content" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.266916 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="extract-content" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.266934 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="extract-content" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.266943 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="extract-content" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.266964 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="extract-utilities" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.266973 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="extract-utilities" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.266981 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="registry-server" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.266987 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="registry-server" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.266999 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="extract-utilities" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267005 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="extract-utilities" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.267032 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67db3c53-552c-458d-b333-09ad7b0f0447" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267041 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="67db3c53-552c-458d-b333-09ad7b0f0447" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 15:29:33 crc kubenswrapper[4735]: E0131 15:29:33.267052 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="registry-server" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267060 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="registry-server" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267286 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0367752-ba32-49db-b522-aa19fae7f7ac" containerName="registry-server" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267301 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af49c32-a5f8-4f9d-9db7-1d45436498ea" containerName="registry-server" Jan 31 15:29:33 crc 
kubenswrapper[4735]: I0131 15:29:33.267316 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="67db3c53-552c-458d-b333-09ad7b0f0447" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.267960 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.277487 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg"] Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317012 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317109 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317171 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317250 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317045 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.317437 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.381275 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.381593 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.381708 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhkc\" (UniqueName: \"kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.381871 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.381927 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.382183 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484412 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484515 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484546 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhkc\" (UniqueName: \"kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484577 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.484670 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.490549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.490888 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.490943 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.494031 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.503192 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.503563 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhkc\" (UniqueName: \"kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:33 crc kubenswrapper[4735]: I0131 15:29:33.641016 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:29:34 crc kubenswrapper[4735]: I0131 15:29:34.227770 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg"] Jan 31 15:29:35 crc kubenswrapper[4735]: I0131 15:29:35.189655 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" event={"ID":"bdcd1840-f7b6-41b8-bad9-43e441f1cad9","Type":"ContainerStarted","Data":"9ed452621a5c0cd0284be5a6548d510c3fb765e58c43bc56458b75f0e8f07528"} Jan 31 15:29:35 crc kubenswrapper[4735]: I0131 15:29:35.190208 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" event={"ID":"bdcd1840-f7b6-41b8-bad9-43e441f1cad9","Type":"ContainerStarted","Data":"7fb0f5216c5221f70ecceb984d6113c8b4310f53f4bee2456ff33e8a295adbbc"} Jan 31 15:29:35 crc kubenswrapper[4735]: I0131 15:29:35.217862 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" podStartSLOduration=1.711912034 podStartE2EDuration="2.217845152s" podCreationTimestamp="2026-01-31 15:29:33 +0000 UTC" firstStartedPulling="2026-01-31 15:29:34.233863782 +0000 UTC m=+1860.007192834" lastFinishedPulling="2026-01-31 15:29:34.73979691 +0000 UTC m=+1860.513125952" observedRunningTime="2026-01-31 15:29:35.215652179 +0000 UTC m=+1860.988981221" watchObservedRunningTime="2026-01-31 15:29:35.217845152 +0000 UTC m=+1860.991174194" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.165910 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8"] Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.169816 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.173086 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.173130 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.191677 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8"] Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.248402 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjnb\" (UniqueName: \"kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.249063 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.249208 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.352248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.353076 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.353634 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjnb\" (UniqueName: \"kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.354477 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume\") pod 
\"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.359082 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.382374 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjnb\" (UniqueName: \"kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb\") pod \"collect-profiles-29497890-gkbv8\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:00 crc kubenswrapper[4735]: I0131 15:30:00.509301 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:01 crc kubenswrapper[4735]: I0131 15:30:01.013587 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8"] Jan 31 15:30:01 crc kubenswrapper[4735]: I0131 15:30:01.440916 4735 generic.go:334] "Generic (PLEG): container finished" podID="58e68421-129a-4efd-a7e6-3472b3a27ffd" containerID="0e0240363dc13594c69f4edec25e1382b28e0064f991a748620e5023f2120fa7" exitCode=0 Jan 31 15:30:01 crc kubenswrapper[4735]: I0131 15:30:01.440955 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" event={"ID":"58e68421-129a-4efd-a7e6-3472b3a27ffd","Type":"ContainerDied","Data":"0e0240363dc13594c69f4edec25e1382b28e0064f991a748620e5023f2120fa7"} Jan 31 15:30:01 crc kubenswrapper[4735]: I0131 15:30:01.440979 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" event={"ID":"58e68421-129a-4efd-a7e6-3472b3a27ffd","Type":"ContainerStarted","Data":"75f6cb4949359ce17578f6de82bc084e6cc0a9e732c64df9c1737a318cdb7f58"} Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.785687 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.905548 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnjnb\" (UniqueName: \"kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb\") pod \"58e68421-129a-4efd-a7e6-3472b3a27ffd\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.905668 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume\") pod \"58e68421-129a-4efd-a7e6-3472b3a27ffd\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.905771 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume\") pod \"58e68421-129a-4efd-a7e6-3472b3a27ffd\" (UID: \"58e68421-129a-4efd-a7e6-3472b3a27ffd\") " Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.906710 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume" (OuterVolumeSpecName: "config-volume") pod "58e68421-129a-4efd-a7e6-3472b3a27ffd" (UID: "58e68421-129a-4efd-a7e6-3472b3a27ffd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.912636 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58e68421-129a-4efd-a7e6-3472b3a27ffd" (UID: "58e68421-129a-4efd-a7e6-3472b3a27ffd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:02 crc kubenswrapper[4735]: I0131 15:30:02.912653 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb" (OuterVolumeSpecName: "kube-api-access-xnjnb") pod "58e68421-129a-4efd-a7e6-3472b3a27ffd" (UID: "58e68421-129a-4efd-a7e6-3472b3a27ffd"). InnerVolumeSpecName "kube-api-access-xnjnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.007371 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnjnb\" (UniqueName: \"kubernetes.io/projected/58e68421-129a-4efd-a7e6-3472b3a27ffd-kube-api-access-xnjnb\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.007402 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e68421-129a-4efd-a7e6-3472b3a27ffd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.007411 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58e68421-129a-4efd-a7e6-3472b3a27ffd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.461728 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" event={"ID":"58e68421-129a-4efd-a7e6-3472b3a27ffd","Type":"ContainerDied","Data":"75f6cb4949359ce17578f6de82bc084e6cc0a9e732c64df9c1737a318cdb7f58"} Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.461975 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f6cb4949359ce17578f6de82bc084e6cc0a9e732c64df9c1737a318cdb7f58" Jan 31 15:30:03 crc kubenswrapper[4735]: I0131 15:30:03.461865 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-gkbv8" Jan 31 15:30:21 crc kubenswrapper[4735]: I0131 15:30:21.637020 4735 generic.go:334] "Generic (PLEG): container finished" podID="bdcd1840-f7b6-41b8-bad9-43e441f1cad9" containerID="9ed452621a5c0cd0284be5a6548d510c3fb765e58c43bc56458b75f0e8f07528" exitCode=0 Jan 31 15:30:21 crc kubenswrapper[4735]: I0131 15:30:21.637112 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" event={"ID":"bdcd1840-f7b6-41b8-bad9-43e441f1cad9","Type":"ContainerDied","Data":"9ed452621a5c0cd0284be5a6548d510c3fb765e58c43bc56458b75f0e8f07528"} Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.076229 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.225175 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.225293 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.225350 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.225500 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.226368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hhkc\" (UniqueName: \"kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.226476 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\" (UID: \"bdcd1840-f7b6-41b8-bad9-43e441f1cad9\") " Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.231750 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc" (OuterVolumeSpecName: "kube-api-access-9hhkc") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "kube-api-access-9hhkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.232018 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.260131 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.261585 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.266467 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.288724 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory" (OuterVolumeSpecName: "inventory") pod "bdcd1840-f7b6-41b8-bad9-43e441f1cad9" (UID: "bdcd1840-f7b6-41b8-bad9-43e441f1cad9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329240 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329280 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hhkc\" (UniqueName: \"kubernetes.io/projected/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-kube-api-access-9hhkc\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329291 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329303 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329313 4735 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.329322 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdcd1840-f7b6-41b8-bad9-43e441f1cad9-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.659631 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.659624 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg" event={"ID":"bdcd1840-f7b6-41b8-bad9-43e441f1cad9","Type":"ContainerDied","Data":"7fb0f5216c5221f70ecceb984d6113c8b4310f53f4bee2456ff33e8a295adbbc"} Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.659703 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb0f5216c5221f70ecceb984d6113c8b4310f53f4bee2456ff33e8a295adbbc" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.792543 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw"] Jan 31 15:30:23 crc kubenswrapper[4735]: E0131 15:30:23.793036 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e68421-129a-4efd-a7e6-3472b3a27ffd" containerName="collect-profiles" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.793053 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e68421-129a-4efd-a7e6-3472b3a27ffd" containerName="collect-profiles" Jan 31 15:30:23 crc kubenswrapper[4735]: E0131 15:30:23.793074 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcd1840-f7b6-41b8-bad9-43e441f1cad9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.793082 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcd1840-f7b6-41b8-bad9-43e441f1cad9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.793270 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e68421-129a-4efd-a7e6-3472b3a27ffd" containerName="collect-profiles" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.793284 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcd1840-f7b6-41b8-bad9-43e441f1cad9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.793982 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.797175 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.798120 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.800823 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.801458 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.801615 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.803519 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw"] Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.938629 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.938996 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.939034 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsd8\" (UniqueName: \"kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.939084 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:23 crc kubenswrapper[4735]: I0131 15:30:23.939172 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.041137 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.041250 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.041304 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsd8\" (UniqueName: \"kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.041379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.041470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.049352 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.049638 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.050611 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.061508 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.072147 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsd8\" (UniqueName: \"kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b49fw\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.132106 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:30:24 crc kubenswrapper[4735]: I0131 15:30:24.719762 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw"] Jan 31 15:30:25 crc kubenswrapper[4735]: I0131 15:30:25.688751 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" event={"ID":"dfa49e11-fd8b-4933-b184-a524c747ee02","Type":"ContainerStarted","Data":"3ab465b321d46796a5f79c1e0c55886c43b8d9bae3d8097842b504210d052511"} Jan 31 15:30:26 crc kubenswrapper[4735]: I0131 15:30:26.702454 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" event={"ID":"dfa49e11-fd8b-4933-b184-a524c747ee02","Type":"ContainerStarted","Data":"d2611264a3b4c5fde9b25a43fe625e425783d3e837a4f066e1b67b721e431b34"} Jan 31 15:30:26 crc kubenswrapper[4735]: I0131 15:30:26.725513 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" podStartSLOduration=3.001312495 podStartE2EDuration="3.725492148s" podCreationTimestamp="2026-01-31 15:30:23 +0000 UTC" firstStartedPulling="2026-01-31 15:30:24.720710151 +0000 UTC m=+1910.494039193" lastFinishedPulling="2026-01-31 15:30:25.444889774 +0000 UTC m=+1911.218218846" observedRunningTime="2026-01-31 15:30:26.723753689 +0000 UTC m=+1912.497082751" watchObservedRunningTime="2026-01-31 15:30:26.725492148 +0000 UTC m=+1912.498821200" Jan 31 15:31:37 crc kubenswrapper[4735]: I0131 15:31:37.346548 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:31:37 crc kubenswrapper[4735]: I0131 15:31:37.347317 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:32:07 crc kubenswrapper[4735]: I0131 15:32:07.345892 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:32:07 crc 
kubenswrapper[4735]: I0131 15:32:07.346474 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:32:37 crc kubenswrapper[4735]: I0131 15:32:37.346563 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:32:37 crc kubenswrapper[4735]: I0131 15:32:37.347354 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:32:37 crc kubenswrapper[4735]: I0131 15:32:37.347452 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:32:37 crc kubenswrapper[4735]: I0131 15:32:37.348597 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:32:37 crc kubenswrapper[4735]: I0131 15:32:37.348704 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f" gracePeriod=600 Jan 31 15:32:38 crc kubenswrapper[4735]: I0131 15:32:38.049225 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f" exitCode=0 Jan 31 15:32:38 crc kubenswrapper[4735]: I0131 15:32:38.049308 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f"} Jan 31 15:32:38 crc kubenswrapper[4735]: I0131 15:32:38.049523 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989"} Jan 31 15:32:38 crc kubenswrapper[4735]: I0131 15:32:38.049546 4735 scope.go:117] "RemoveContainer" containerID="7604533354e9e50054a1a8b6e0ec81bd5b4f1e002b474098510472666815e0d0" Jan 31 15:34:12 crc kubenswrapper[4735]: I0131 15:34:12.028596 4735 generic.go:334] "Generic (PLEG): container finished" podID="dfa49e11-fd8b-4933-b184-a524c747ee02" containerID="d2611264a3b4c5fde9b25a43fe625e425783d3e837a4f066e1b67b721e431b34" exitCode=0 Jan 31 15:34:12 crc 
kubenswrapper[4735]: I0131 15:34:12.028684 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" event={"ID":"dfa49e11-fd8b-4933-b184-a524c747ee02","Type":"ContainerDied","Data":"d2611264a3b4c5fde9b25a43fe625e425783d3e837a4f066e1b67b721e431b34"} Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.560283 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.691705 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam\") pod \"dfa49e11-fd8b-4933-b184-a524c747ee02\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.691806 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsd8\" (UniqueName: \"kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8\") pod \"dfa49e11-fd8b-4933-b184-a524c747ee02\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.691872 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0\") pod \"dfa49e11-fd8b-4933-b184-a524c747ee02\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.691956 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory\") pod \"dfa49e11-fd8b-4933-b184-a524c747ee02\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.692024 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle\") pod \"dfa49e11-fd8b-4933-b184-a524c747ee02\" (UID: \"dfa49e11-fd8b-4933-b184-a524c747ee02\") " Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.698617 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dfa49e11-fd8b-4933-b184-a524c747ee02" (UID: "dfa49e11-fd8b-4933-b184-a524c747ee02"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.699831 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8" (OuterVolumeSpecName: "kube-api-access-6lsd8") pod "dfa49e11-fd8b-4933-b184-a524c747ee02" (UID: "dfa49e11-fd8b-4933-b184-a524c747ee02"). InnerVolumeSpecName "kube-api-access-6lsd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.721202 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfa49e11-fd8b-4933-b184-a524c747ee02" (UID: "dfa49e11-fd8b-4933-b184-a524c747ee02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.736725 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory" (OuterVolumeSpecName: "inventory") pod "dfa49e11-fd8b-4933-b184-a524c747ee02" (UID: "dfa49e11-fd8b-4933-b184-a524c747ee02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.748081 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "dfa49e11-fd8b-4933-b184-a524c747ee02" (UID: "dfa49e11-fd8b-4933-b184-a524c747ee02"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.794845 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.794887 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsd8\" (UniqueName: \"kubernetes.io/projected/dfa49e11-fd8b-4933-b184-a524c747ee02-kube-api-access-6lsd8\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.794907 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.794948 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:13 crc kubenswrapper[4735]: I0131 15:34:13.794963 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa49e11-fd8b-4933-b184-a524c747ee02-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.047653 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" event={"ID":"dfa49e11-fd8b-4933-b184-a524c747ee02","Type":"ContainerDied","Data":"3ab465b321d46796a5f79c1e0c55886c43b8d9bae3d8097842b504210d052511"} Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.047694 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ab465b321d46796a5f79c1e0c55886c43b8d9bae3d8097842b504210d052511" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.047715 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b49fw" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.246979 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l"] Jan 31 15:34:14 crc kubenswrapper[4735]: E0131 15:34:14.247378 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa49e11-fd8b-4933-b184-a524c747ee02" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.247399 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa49e11-fd8b-4933-b184-a524c747ee02" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.247600 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa49e11-fd8b-4933-b184-a524c747ee02" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.248214 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.251904 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.252394 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.253056 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.253573 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.254163 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.254640 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.258256 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.267632 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l"] Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405322 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405384 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405414 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405648 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405682 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405713 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405751 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405804 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.405953 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhvv\" (UniqueName: \"kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.507854 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhvv\" (UniqueName: \"kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508121 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508153 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508178 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508296 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508326 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508353 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508389 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.508414 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.510139 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.515280 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.515936 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.520361 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.526339 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.527988 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.528221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.541998 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: 
I0131 15:34:14.545786 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhvv\" (UniqueName: \"kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-grs6l\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:14 crc kubenswrapper[4735]: I0131 15:34:14.568060 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:34:15 crc kubenswrapper[4735]: W0131 15:34:15.143040 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6cf70ec_3fc5_4144_8756_bcf0a8704416.slice/crio-40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8 WatchSource:0}: Error finding container 40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8: Status 404 returned error can't find the container with id 40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8 Jan 31 15:34:15 crc kubenswrapper[4735]: I0131 15:34:15.146399 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l"] Jan 31 15:34:15 crc kubenswrapper[4735]: I0131 15:34:15.149766 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:34:16 crc kubenswrapper[4735]: I0131 15:34:16.078142 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" event={"ID":"c6cf70ec-3fc5-4144-8756-bcf0a8704416","Type":"ContainerStarted","Data":"9c9898d6d3865271018fccccafb9b38ebb8aa292d7a92f6d36f626539663f5ee"} Jan 31 15:34:16 crc kubenswrapper[4735]: I0131 15:34:16.078448 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" event={"ID":"c6cf70ec-3fc5-4144-8756-bcf0a8704416","Type":"ContainerStarted","Data":"40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8"} Jan 31 15:34:16 crc kubenswrapper[4735]: I0131 15:34:16.101837 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" podStartSLOduration=1.655018932 podStartE2EDuration="2.10181791s" podCreationTimestamp="2026-01-31 15:34:14 +0000 UTC" firstStartedPulling="2026-01-31 15:34:15.149360125 +0000 UTC m=+2140.922689187" lastFinishedPulling="2026-01-31 15:34:15.596159123 +0000 UTC m=+2141.369488165" observedRunningTime="2026-01-31 15:34:16.095213202 +0000 UTC m=+2141.868542264" watchObservedRunningTime="2026-01-31 15:34:16.10181791 +0000 UTC m=+2141.875146962" Jan 31 15:34:37 crc kubenswrapper[4735]: I0131 15:34:37.345559 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:34:37 crc kubenswrapper[4735]: I0131 15:34:37.346117 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:34:55 crc 
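Each pod above goes through the same volume lifecycle: operationExecutor.VerifyControllerAttachedVolume, then MountVolume started and MountVolume.SetUp succeeded when the pod is admitted, and UnmountVolume started, UnmountVolume.TearDown succeeded and "Volume detached" at teardown. A rough way to spot volumes that were set up but never torn down is to pair those operation_generator records by their kubernetes.io/... unique name. This is a sketch under assumptions: the kubelet.log path and both regular expressions are modelled on the lines in this log and are not an official tool:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Pairs MountVolume.SetUp successes with UnmountVolume.TearDown successes,
// keyed by the volume's unique name, and reports anything left unmatched.
func main() {
	f, err := os.Open("kubelet.log") // assumed location of the decompressed log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// SetUp messages appear as structured, quote-escaped text, e.g.
	//   "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/...\") ..."
	mountRe := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"`)
	// TearDown messages carry the unique name as the first plainly quoted string, e.g.
	//   UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/..."
	unmountRe := regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)

	mounted := map[string]bool{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := mountRe.FindStringSubmatch(line); m != nil {
			mounted[m[1]] = true
		}
		if m := unmountRe.FindStringSubmatch(line); m != nil {
			delete(mounted, m[1])
		}
	}
	for v := range mounted {
		fmt.Println("still mounted at end of log:", v)
	}
}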
kubenswrapper[4735]: I0131 15:34:55.055494 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.058293 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.080693 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.112108 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.112258 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.112444 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5w59\" (UniqueName: \"kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.214219 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.214291 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.214346 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5w59\" (UniqueName: \"kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.214901 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.214929 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content\") pod 
\"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.233868 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5w59\" (UniqueName: \"kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59\") pod \"redhat-operators-cxxhv\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.392012 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:34:55 crc kubenswrapper[4735]: I0131 15:34:55.868186 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:34:56 crc kubenswrapper[4735]: I0131 15:34:56.515228 4735 generic.go:334] "Generic (PLEG): container finished" podID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerID="670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614" exitCode=0 Jan 31 15:34:56 crc kubenswrapper[4735]: I0131 15:34:56.515329 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerDied","Data":"670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614"} Jan 31 15:34:56 crc kubenswrapper[4735]: I0131 15:34:56.515505 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerStarted","Data":"a1c2ab4ae82543833a7ea8a706d3d58fc23199e9c04ee1ec56c2004f27d51886"} Jan 31 15:34:57 crc kubenswrapper[4735]: I0131 15:34:57.526347 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerStarted","Data":"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b"} Jan 31 15:34:59 crc kubenswrapper[4735]: I0131 15:34:59.554622 4735 generic.go:334] "Generic (PLEG): container finished" podID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerID="e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b" exitCode=0 Jan 31 15:34:59 crc kubenswrapper[4735]: I0131 15:34:59.556974 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerDied","Data":"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b"} Jan 31 15:35:00 crc kubenswrapper[4735]: I0131 15:35:00.564982 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerStarted","Data":"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b"} Jan 31 15:35:00 crc kubenswrapper[4735]: I0131 15:35:00.591271 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cxxhv" podStartSLOduration=2.094294528 podStartE2EDuration="5.591251471s" podCreationTimestamp="2026-01-31 15:34:55 +0000 UTC" firstStartedPulling="2026-01-31 15:34:56.518104579 +0000 UTC m=+2182.291433661" lastFinishedPulling="2026-01-31 15:35:00.015061562 +0000 UTC m=+2185.788390604" observedRunningTime="2026-01-31 15:35:00.59014373 +0000 UTC m=+2186.363472792" 
watchObservedRunningTime="2026-01-31 15:35:00.591251471 +0000 UTC m=+2186.364580513" Jan 31 15:35:05 crc kubenswrapper[4735]: I0131 15:35:05.392856 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:05 crc kubenswrapper[4735]: I0131 15:35:05.393515 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:06 crc kubenswrapper[4735]: I0131 15:35:06.467947 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cxxhv" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="registry-server" probeResult="failure" output=< Jan 31 15:35:06 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:35:06 crc kubenswrapper[4735]: > Jan 31 15:35:07 crc kubenswrapper[4735]: I0131 15:35:07.346045 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:35:07 crc kubenswrapper[4735]: I0131 15:35:07.346130 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.121592 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.128102 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.168056 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.218570 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.218655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqvb\" (UniqueName: \"kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.218711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.320630 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqvb\" (UniqueName: \"kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.320752 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.320914 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.321600 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.321621 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.344654 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wqqvb\" (UniqueName: \"kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb\") pod \"certified-operators-k97mp\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.463921 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:12 crc kubenswrapper[4735]: I0131 15:35:12.966600 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:13 crc kubenswrapper[4735]: I0131 15:35:13.682919 4735 generic.go:334] "Generic (PLEG): container finished" podID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerID="9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729" exitCode=0 Jan 31 15:35:13 crc kubenswrapper[4735]: I0131 15:35:13.682961 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerDied","Data":"9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729"} Jan 31 15:35:13 crc kubenswrapper[4735]: I0131 15:35:13.683229 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerStarted","Data":"71e63a03f6103abd1ddaf435829be595e98ee58f5b7e5cefa0322adedf72bbfc"} Jan 31 15:35:14 crc kubenswrapper[4735]: I0131 15:35:14.708727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerStarted","Data":"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1"} Jan 31 15:35:15 crc kubenswrapper[4735]: I0131 15:35:15.470021 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:15 crc kubenswrapper[4735]: I0131 15:35:15.534278 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:15 crc kubenswrapper[4735]: I0131 15:35:15.721819 4735 generic.go:334] "Generic (PLEG): container finished" podID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerID="fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1" exitCode=0 Jan 31 15:35:15 crc kubenswrapper[4735]: I0131 15:35:15.721932 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerDied","Data":"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1"} Jan 31 15:35:16 crc kubenswrapper[4735]: I0131 15:35:16.732971 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerStarted","Data":"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55"} Jan 31 15:35:16 crc kubenswrapper[4735]: I0131 15:35:16.759726 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k97mp" podStartSLOduration=2.201948169 podStartE2EDuration="4.759705456s" podCreationTimestamp="2026-01-31 15:35:12 +0000 UTC" firstStartedPulling="2026-01-31 15:35:13.685242572 +0000 UTC 
m=+2199.458571614" lastFinishedPulling="2026-01-31 15:35:16.242999849 +0000 UTC m=+2202.016328901" observedRunningTime="2026-01-31 15:35:16.758199002 +0000 UTC m=+2202.531528074" watchObservedRunningTime="2026-01-31 15:35:16.759705456 +0000 UTC m=+2202.533034498" Jan 31 15:35:17 crc kubenswrapper[4735]: I0131 15:35:17.901960 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:35:17 crc kubenswrapper[4735]: I0131 15:35:17.902576 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cxxhv" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="registry-server" containerID="cri-o://0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b" gracePeriod=2 Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.377837 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.440895 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5w59\" (UniqueName: \"kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59\") pod \"686562ef-e5e0-4716-9f29-f8a579bfd257\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.441392 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities\") pod \"686562ef-e5e0-4716-9f29-f8a579bfd257\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.441574 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content\") pod \"686562ef-e5e0-4716-9f29-f8a579bfd257\" (UID: \"686562ef-e5e0-4716-9f29-f8a579bfd257\") " Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.442229 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities" (OuterVolumeSpecName: "utilities") pod "686562ef-e5e0-4716-9f29-f8a579bfd257" (UID: "686562ef-e5e0-4716-9f29-f8a579bfd257"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.447570 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59" (OuterVolumeSpecName: "kube-api-access-t5w59") pod "686562ef-e5e0-4716-9f29-f8a579bfd257" (UID: "686562ef-e5e0-4716-9f29-f8a579bfd257"). InnerVolumeSpecName "kube-api-access-t5w59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.448228 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.449296 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5w59\" (UniqueName: \"kubernetes.io/projected/686562ef-e5e0-4716-9f29-f8a579bfd257-kube-api-access-t5w59\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.594137 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "686562ef-e5e0-4716-9f29-f8a579bfd257" (UID: "686562ef-e5e0-4716-9f29-f8a579bfd257"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.652971 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/686562ef-e5e0-4716-9f29-f8a579bfd257-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.758650 4735 generic.go:334] "Generic (PLEG): container finished" podID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerID="0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b" exitCode=0 Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.758692 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerDied","Data":"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b"} Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.758723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxxhv" event={"ID":"686562ef-e5e0-4716-9f29-f8a579bfd257","Type":"ContainerDied","Data":"a1c2ab4ae82543833a7ea8a706d3d58fc23199e9c04ee1ec56c2004f27d51886"} Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.758746 4735 scope.go:117] "RemoveContainer" containerID="0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.758800 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxxhv" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.798671 4735 scope.go:117] "RemoveContainer" containerID="e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.832484 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.845289 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cxxhv"] Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.846942 4735 scope.go:117] "RemoveContainer" containerID="670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.918795 4735 scope.go:117] "RemoveContainer" containerID="0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b" Jan 31 15:35:18 crc kubenswrapper[4735]: E0131 15:35:18.919984 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b\": container with ID starting with 0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b not found: ID does not exist" containerID="0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.920394 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b"} err="failed to get container status \"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b\": rpc error: code = NotFound desc = could not find container \"0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b\": container with ID starting with 0defc6eb8eb4aaaa6cf79156710cc2a6328e4ba24beeeac7a02eb7b31d4dcc1b not found: ID does not exist" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.920455 4735 scope.go:117] "RemoveContainer" containerID="e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b" Jan 31 15:35:18 crc kubenswrapper[4735]: E0131 15:35:18.921019 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b\": container with ID starting with e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b not found: ID does not exist" containerID="e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.921061 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b"} err="failed to get container status \"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b\": rpc error: code = NotFound desc = could not find container \"e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b\": container with ID starting with e5e62094b7156c4a64ebc48e7d560d4b12a0abb3a0ff6984c246b85c6cf05f0b not found: ID does not exist" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.921107 4735 scope.go:117] "RemoveContainer" containerID="670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614" Jan 31 15:35:18 crc kubenswrapper[4735]: E0131 15:35:18.921549 4735 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614\": container with ID starting with 670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614 not found: ID does not exist" containerID="670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614" Jan 31 15:35:18 crc kubenswrapper[4735]: I0131 15:35:18.921597 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614"} err="failed to get container status \"670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614\": rpc error: code = NotFound desc = could not find container \"670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614\": container with ID starting with 670d4b0339cf107f3ae50b0a9335d333b83df31ebda4708ab77f3a5a4fed6614 not found: ID does not exist" Jan 31 15:35:19 crc kubenswrapper[4735]: I0131 15:35:19.564815 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" path="/var/lib/kubelet/pods/686562ef-e5e0-4716-9f29-f8a579bfd257/volumes" Jan 31 15:35:22 crc kubenswrapper[4735]: I0131 15:35:22.464965 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:22 crc kubenswrapper[4735]: I0131 15:35:22.466254 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:22 crc kubenswrapper[4735]: I0131 15:35:22.537681 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:22 crc kubenswrapper[4735]: I0131 15:35:22.966254 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:23 crc kubenswrapper[4735]: I0131 15:35:23.030203 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:24 crc kubenswrapper[4735]: I0131 15:35:24.942159 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k97mp" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="registry-server" containerID="cri-o://2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55" gracePeriod=2 Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.552912 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.576316 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities\") pod \"cf284e70-1150-4681-8bac-400b8bceb9cd\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.576415 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqqvb\" (UniqueName: \"kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb\") pod \"cf284e70-1150-4681-8bac-400b8bceb9cd\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.576562 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content\") pod \"cf284e70-1150-4681-8bac-400b8bceb9cd\" (UID: \"cf284e70-1150-4681-8bac-400b8bceb9cd\") " Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.578451 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities" (OuterVolumeSpecName: "utilities") pod "cf284e70-1150-4681-8bac-400b8bceb9cd" (UID: "cf284e70-1150-4681-8bac-400b8bceb9cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.585113 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb" (OuterVolumeSpecName: "kube-api-access-wqqvb") pod "cf284e70-1150-4681-8bac-400b8bceb9cd" (UID: "cf284e70-1150-4681-8bac-400b8bceb9cd"). InnerVolumeSpecName "kube-api-access-wqqvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.652956 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf284e70-1150-4681-8bac-400b8bceb9cd" (UID: "cf284e70-1150-4681-8bac-400b8bceb9cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.679319 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.679363 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqqvb\" (UniqueName: \"kubernetes.io/projected/cf284e70-1150-4681-8bac-400b8bceb9cd-kube-api-access-wqqvb\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.679377 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf284e70-1150-4681-8bac-400b8bceb9cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.956936 4735 generic.go:334] "Generic (PLEG): container finished" podID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerID="2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55" exitCode=0 Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.957011 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerDied","Data":"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55"} Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.957069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k97mp" event={"ID":"cf284e70-1150-4681-8bac-400b8bceb9cd","Type":"ContainerDied","Data":"71e63a03f6103abd1ddaf435829be595e98ee58f5b7e5cefa0322adedf72bbfc"} Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.957107 4735 scope.go:117] "RemoveContainer" containerID="2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.958340 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k97mp" Jan 31 15:35:25 crc kubenswrapper[4735]: I0131 15:35:25.984506 4735 scope.go:117] "RemoveContainer" containerID="fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.005895 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.019197 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k97mp"] Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.024870 4735 scope.go:117] "RemoveContainer" containerID="9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.073448 4735 scope.go:117] "RemoveContainer" containerID="2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55" Jan 31 15:35:26 crc kubenswrapper[4735]: E0131 15:35:26.073979 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55\": container with ID starting with 2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55 not found: ID does not exist" containerID="2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.074027 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55"} err="failed to get container status \"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55\": rpc error: code = NotFound desc = could not find container \"2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55\": container with ID starting with 2e3fdb0c91d059d99986bd8fd07f5c24093476961537f56b2b693ea021766b55 not found: ID does not exist" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.074062 4735 scope.go:117] "RemoveContainer" containerID="fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1" Jan 31 15:35:26 crc kubenswrapper[4735]: E0131 15:35:26.074444 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1\": container with ID starting with fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1 not found: ID does not exist" containerID="fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.074485 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1"} err="failed to get container status \"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1\": rpc error: code = NotFound desc = could not find container \"fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1\": container with ID starting with fb946dc7de1fb83fdd0219e2ef6011a5159cfaec3ad94da9b198171905a34af1 not found: ID does not exist" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.074512 4735 scope.go:117] "RemoveContainer" containerID="9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729" Jan 31 15:35:26 crc kubenswrapper[4735]: E0131 15:35:26.075157 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729\": container with ID starting with 9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729 not found: ID does not exist" containerID="9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729" Jan 31 15:35:26 crc kubenswrapper[4735]: I0131 15:35:26.075197 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729"} err="failed to get container status \"9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729\": rpc error: code = NotFound desc = could not find container \"9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729\": container with ID starting with 9194cdf0ec8adb8a1a3ad473c88189a8fc115f28ae3aa09ad1a369770707f729 not found: ID does not exist" Jan 31 15:35:27 crc kubenswrapper[4735]: I0131 15:35:27.560492 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" path="/var/lib/kubelet/pods/cf284e70-1150-4681-8bac-400b8bceb9cd/volumes" Jan 31 15:35:37 crc kubenswrapper[4735]: I0131 15:35:37.346018 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:35:37 crc kubenswrapper[4735]: I0131 15:35:37.346720 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:35:37 crc kubenswrapper[4735]: I0131 15:35:37.346791 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:35:37 crc kubenswrapper[4735]: I0131 15:35:37.347879 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:35:37 crc kubenswrapper[4735]: I0131 15:35:37.347981 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" gracePeriod=600 Jan 31 15:35:37 crc kubenswrapper[4735]: E0131 15:35:37.476307 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:35:38 crc kubenswrapper[4735]: I0131 15:35:38.101314 4735 generic.go:334] 
"Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" exitCode=0 Jan 31 15:35:38 crc kubenswrapper[4735]: I0131 15:35:38.101399 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989"} Jan 31 15:35:38 crc kubenswrapper[4735]: I0131 15:35:38.101505 4735 scope.go:117] "RemoveContainer" containerID="686cc2a21f64050563046209fdd632dc47758830c538cdea4dce455a944e002f" Jan 31 15:35:38 crc kubenswrapper[4735]: I0131 15:35:38.102582 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:35:38 crc kubenswrapper[4735]: E0131 15:35:38.103347 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:35:51 crc kubenswrapper[4735]: I0131 15:35:51.540602 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:35:51 crc kubenswrapper[4735]: E0131 15:35:51.541905 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:36:04 crc kubenswrapper[4735]: I0131 15:36:04.541285 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:36:04 crc kubenswrapper[4735]: E0131 15:36:04.542399 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:36:16 crc kubenswrapper[4735]: I0131 15:36:16.540253 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:36:16 crc kubenswrapper[4735]: E0131 15:36:16.541163 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:36:26 crc kubenswrapper[4735]: I0131 15:36:26.636848 4735 generic.go:334] "Generic (PLEG): container finished" podID="c6cf70ec-3fc5-4144-8756-bcf0a8704416" 
containerID="9c9898d6d3865271018fccccafb9b38ebb8aa292d7a92f6d36f626539663f5ee" exitCode=0 Jan 31 15:36:26 crc kubenswrapper[4735]: I0131 15:36:26.636970 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" event={"ID":"c6cf70ec-3fc5-4144-8756-bcf0a8704416","Type":"ContainerDied","Data":"9c9898d6d3865271018fccccafb9b38ebb8aa292d7a92f6d36f626539663f5ee"} Jan 31 15:36:27 crc kubenswrapper[4735]: I0131 15:36:27.540665 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:36:27 crc kubenswrapper[4735]: E0131 15:36:27.541187 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.108503 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296546 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnhvv\" (UniqueName: \"kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296606 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296646 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296673 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296738 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296762 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 
15:36:28.296794 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296834 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.296857 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0\") pod \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\" (UID: \"c6cf70ec-3fc5-4144-8756-bcf0a8704416\") " Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.302596 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv" (OuterVolumeSpecName: "kube-api-access-hnhvv") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "kube-api-access-hnhvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.308148 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.324633 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.345117 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory" (OuterVolumeSpecName: "inventory") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.354112 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.356797 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.363881 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.365346 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.376664 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c6cf70ec-3fc5-4144-8756-bcf0a8704416" (UID: "c6cf70ec-3fc5-4144-8756-bcf0a8704416"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399278 4735 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399311 4735 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399321 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399330 4735 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399341 4735 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399351 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnhvv\" (UniqueName: \"kubernetes.io/projected/c6cf70ec-3fc5-4144-8756-bcf0a8704416-kube-api-access-hnhvv\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399359 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399368 4735 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.399376 4735 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c6cf70ec-3fc5-4144-8756-bcf0a8704416-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.665896 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" event={"ID":"c6cf70ec-3fc5-4144-8756-bcf0a8704416","Type":"ContainerDied","Data":"40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8"} Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.665952 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c0ef3e39b6d7152aafc0bc28f7247f627cf79230e42c7f91a4d0f8824ceea8" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.666036 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-grs6l" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839190 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth"] Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839756 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="extract-utilities" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839786 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="extract-utilities" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839810 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="registry-server" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839823 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="registry-server" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839853 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="extract-content" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839868 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="extract-content" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839888 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="extract-content" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839899 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="extract-content" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839927 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="extract-utilities" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839939 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="extract-utilities" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.839957 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="registry-server" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.839969 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="registry-server" Jan 31 15:36:28 crc kubenswrapper[4735]: E0131 15:36:28.840011 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cf70ec-3fc5-4144-8756-bcf0a8704416" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.840027 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cf70ec-3fc5-4144-8756-bcf0a8704416" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.840369 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cf70ec-3fc5-4144-8756-bcf0a8704416" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.840399 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="686562ef-e5e0-4716-9f29-f8a579bfd257" containerName="registry-server" Jan 31 
15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.840454 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf284e70-1150-4681-8bac-400b8bceb9cd" containerName="registry-server" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.841652 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.846893 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.846929 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.847111 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.847150 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.848667 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tmg2j" Jan 31 15:36:28 crc kubenswrapper[4735]: I0131 15:36:28.849100 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth"] Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.010385 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.010565 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.010622 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.010665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.010907 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d887d\" (UniqueName: \"kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.011074 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.011144 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.113512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.113667 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.113753 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.113852 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.114089 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d887d\" (UniqueName: \"kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 
crc kubenswrapper[4735]: I0131 15:36:29.114263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.114328 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.119728 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.119914 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.121575 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.122788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.131108 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.132686 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.146183 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d887d\" (UniqueName: \"kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cdsth\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.166889 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:36:29 crc kubenswrapper[4735]: I0131 15:36:29.752355 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth"] Jan 31 15:36:30 crc kubenswrapper[4735]: I0131 15:36:30.692308 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" event={"ID":"ad304d37-c310-4b94-b535-b75f3ee49e81","Type":"ContainerStarted","Data":"6ba68c431a71d06d16efbf0ecfb586d5c91a900ed295821bdaa1719997c68be3"} Jan 31 15:36:30 crc kubenswrapper[4735]: I0131 15:36:30.694369 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" event={"ID":"ad304d37-c310-4b94-b535-b75f3ee49e81","Type":"ContainerStarted","Data":"bfe3168777ce5566efd2e2468ce21876885da4a203a6768d8d18938fc94af516"} Jan 31 15:36:30 crc kubenswrapper[4735]: I0131 15:36:30.726903 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" podStartSLOduration=2.299007549 podStartE2EDuration="2.726875086s" podCreationTimestamp="2026-01-31 15:36:28 +0000 UTC" firstStartedPulling="2026-01-31 15:36:29.762514923 +0000 UTC m=+2275.535843965" lastFinishedPulling="2026-01-31 15:36:30.19038243 +0000 UTC m=+2275.963711502" observedRunningTime="2026-01-31 15:36:30.723288095 +0000 UTC m=+2276.496617157" watchObservedRunningTime="2026-01-31 15:36:30.726875086 +0000 UTC m=+2276.500204148" Jan 31 15:36:40 crc kubenswrapper[4735]: I0131 15:36:40.540182 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:36:40 crc kubenswrapper[4735]: E0131 15:36:40.541685 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:36:51 crc kubenswrapper[4735]: I0131 15:36:51.546372 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:36:51 crc kubenswrapper[4735]: E0131 15:36:51.547377 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" 
podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:37:02 crc kubenswrapper[4735]: I0131 15:37:02.539815 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:37:02 crc kubenswrapper[4735]: E0131 15:37:02.540661 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:37:13 crc kubenswrapper[4735]: I0131 15:37:13.540571 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:37:13 crc kubenswrapper[4735]: E0131 15:37:13.541790 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:37:27 crc kubenswrapper[4735]: I0131 15:37:27.540389 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:37:27 crc kubenswrapper[4735]: E0131 15:37:27.541226 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:37:41 crc kubenswrapper[4735]: I0131 15:37:41.541057 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:37:41 crc kubenswrapper[4735]: E0131 15:37:41.542009 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:37:55 crc kubenswrapper[4735]: I0131 15:37:55.552520 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:37:55 crc kubenswrapper[4735]: E0131 15:37:55.554040 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:38:10 crc kubenswrapper[4735]: I0131 15:38:10.539667 4735 scope.go:117] "RemoveContainer" 
containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:38:10 crc kubenswrapper[4735]: E0131 15:38:10.540303 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:38:23 crc kubenswrapper[4735]: I0131 15:38:23.539799 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:38:23 crc kubenswrapper[4735]: E0131 15:38:23.540545 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:38:37 crc kubenswrapper[4735]: I0131 15:38:37.540111 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:38:37 crc kubenswrapper[4735]: E0131 15:38:37.541010 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:38:50 crc kubenswrapper[4735]: I0131 15:38:50.540982 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:38:50 crc kubenswrapper[4735]: E0131 15:38:50.541843 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:38:55 crc kubenswrapper[4735]: I0131 15:38:55.167017 4735 generic.go:334] "Generic (PLEG): container finished" podID="ad304d37-c310-4b94-b535-b75f3ee49e81" containerID="6ba68c431a71d06d16efbf0ecfb586d5c91a900ed295821bdaa1719997c68be3" exitCode=0 Jan 31 15:38:55 crc kubenswrapper[4735]: I0131 15:38:55.167138 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" event={"ID":"ad304d37-c310-4b94-b535-b75f3ee49e81","Type":"ContainerDied","Data":"6ba68c431a71d06d16efbf0ecfb586d5c91a900ed295821bdaa1719997c68be3"} Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.631437 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.786664 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.786742 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.787629 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d887d\" (UniqueName: \"kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.787762 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.787898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.787985 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.788043 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0\") pod \"ad304d37-c310-4b94-b535-b75f3ee49e81\" (UID: \"ad304d37-c310-4b94-b535-b75f3ee49e81\") " Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.795874 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.796661 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d" (OuterVolumeSpecName: "kube-api-access-d887d") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "kube-api-access-d887d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.829396 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.841037 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.845688 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.849024 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.853918 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory" (OuterVolumeSpecName: "inventory") pod "ad304d37-c310-4b94-b535-b75f3ee49e81" (UID: "ad304d37-c310-4b94-b535-b75f3ee49e81"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890630 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890677 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890700 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d887d\" (UniqueName: \"kubernetes.io/projected/ad304d37-c310-4b94-b535-b75f3ee49e81-kube-api-access-d887d\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890719 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890736 4735 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890755 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:56 crc kubenswrapper[4735]: I0131 15:38:56.890773 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ad304d37-c310-4b94-b535-b75f3ee49e81-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 15:38:57 crc kubenswrapper[4735]: I0131 15:38:57.194747 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" event={"ID":"ad304d37-c310-4b94-b535-b75f3ee49e81","Type":"ContainerDied","Data":"bfe3168777ce5566efd2e2468ce21876885da4a203a6768d8d18938fc94af516"} Jan 31 15:38:57 crc kubenswrapper[4735]: I0131 15:38:57.194791 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe3168777ce5566efd2e2468ce21876885da4a203a6768d8d18938fc94af516" Jan 31 15:38:57 crc kubenswrapper[4735]: I0131 15:38:57.194801 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cdsth" Jan 31 15:39:01 crc kubenswrapper[4735]: I0131 15:39:01.539681 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:39:01 crc kubenswrapper[4735]: E0131 15:39:01.540169 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.270830 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:13 crc kubenswrapper[4735]: E0131 15:39:13.272585 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad304d37-c310-4b94-b535-b75f3ee49e81" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.272612 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad304d37-c310-4b94-b535-b75f3ee49e81" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.273118 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad304d37-c310-4b94-b535-b75f3ee49e81" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.275280 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.294295 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.327991 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.328285 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zbn7\" (UniqueName: \"kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.328327 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.430096 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zbn7\" (UniqueName: \"kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7\") pod \"community-operators-gvgxr\" (UID: 
\"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.430545 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.431044 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.430976 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.431313 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.463565 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zbn7\" (UniqueName: \"kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7\") pod \"community-operators-gvgxr\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.541770 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:39:13 crc kubenswrapper[4735]: E0131 15:39:13.542029 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:39:13 crc kubenswrapper[4735]: I0131 15:39:13.620537 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:14 crc kubenswrapper[4735]: W0131 15:39:14.179778 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod278035a9_e20c_4592_8fc1_7136400f3c2d.slice/crio-35b85e7483e7eef05c04487fb31647159126652e49ecd7fb78e3b8de332aed6f WatchSource:0}: Error finding container 35b85e7483e7eef05c04487fb31647159126652e49ecd7fb78e3b8de332aed6f: Status 404 returned error can't find the container with id 35b85e7483e7eef05c04487fb31647159126652e49ecd7fb78e3b8de332aed6f Jan 31 15:39:14 crc kubenswrapper[4735]: I0131 15:39:14.184846 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:14 crc kubenswrapper[4735]: I0131 15:39:14.371790 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerStarted","Data":"35b85e7483e7eef05c04487fb31647159126652e49ecd7fb78e3b8de332aed6f"} Jan 31 15:39:15 crc kubenswrapper[4735]: I0131 15:39:15.398725 4735 generic.go:334] "Generic (PLEG): container finished" podID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerID="074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2" exitCode=0 Jan 31 15:39:15 crc kubenswrapper[4735]: I0131 15:39:15.398798 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerDied","Data":"074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2"} Jan 31 15:39:15 crc kubenswrapper[4735]: I0131 15:39:15.408508 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:39:17 crc kubenswrapper[4735]: I0131 15:39:17.430210 4735 generic.go:334] "Generic (PLEG): container finished" podID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerID="e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61" exitCode=0 Jan 31 15:39:17 crc kubenswrapper[4735]: I0131 15:39:17.430521 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerDied","Data":"e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61"} Jan 31 15:39:18 crc kubenswrapper[4735]: I0131 15:39:18.446800 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerStarted","Data":"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae"} Jan 31 15:39:18 crc kubenswrapper[4735]: I0131 15:39:18.484620 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gvgxr" podStartSLOduration=3.012772752 podStartE2EDuration="5.484548125s" podCreationTimestamp="2026-01-31 15:39:13 +0000 UTC" firstStartedPulling="2026-01-31 15:39:15.407909306 +0000 UTC m=+2441.181238388" lastFinishedPulling="2026-01-31 15:39:17.879684679 +0000 UTC m=+2443.653013761" observedRunningTime="2026-01-31 15:39:18.472579209 +0000 UTC m=+2444.245908281" watchObservedRunningTime="2026-01-31 15:39:18.484548125 +0000 UTC m=+2444.257877207" Jan 31 15:39:23 crc kubenswrapper[4735]: I0131 15:39:23.621501 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:23 crc kubenswrapper[4735]: I0131 15:39:23.622260 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:23 crc kubenswrapper[4735]: I0131 15:39:23.694839 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:24 crc kubenswrapper[4735]: I0131 15:39:24.587405 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:24 crc kubenswrapper[4735]: I0131 15:39:24.657241 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:26 crc kubenswrapper[4735]: I0131 15:39:26.527234 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gvgxr" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="registry-server" containerID="cri-o://3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae" gracePeriod=2 Jan 31 15:39:26 crc kubenswrapper[4735]: I0131 15:39:26.984274 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.094399 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities\") pod \"278035a9-e20c-4592-8fc1-7136400f3c2d\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.094630 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content\") pod \"278035a9-e20c-4592-8fc1-7136400f3c2d\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.094754 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zbn7\" (UniqueName: \"kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7\") pod \"278035a9-e20c-4592-8fc1-7136400f3c2d\" (UID: \"278035a9-e20c-4592-8fc1-7136400f3c2d\") " Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.095350 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities" (OuterVolumeSpecName: "utilities") pod "278035a9-e20c-4592-8fc1-7136400f3c2d" (UID: "278035a9-e20c-4592-8fc1-7136400f3c2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.095568 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.117010 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7" (OuterVolumeSpecName: "kube-api-access-4zbn7") pod "278035a9-e20c-4592-8fc1-7136400f3c2d" (UID: "278035a9-e20c-4592-8fc1-7136400f3c2d"). InnerVolumeSpecName "kube-api-access-4zbn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.185927 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "278035a9-e20c-4592-8fc1-7136400f3c2d" (UID: "278035a9-e20c-4592-8fc1-7136400f3c2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.196799 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/278035a9-e20c-4592-8fc1-7136400f3c2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.196829 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zbn7\" (UniqueName: \"kubernetes.io/projected/278035a9-e20c-4592-8fc1-7136400f3c2d-kube-api-access-4zbn7\") on node \"crc\" DevicePath \"\"" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.541899 4735 generic.go:334] "Generic (PLEG): container finished" podID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerID="3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae" exitCode=0 Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.543121 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvgxr" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.565273 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerDied","Data":"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae"} Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.565326 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvgxr" event={"ID":"278035a9-e20c-4592-8fc1-7136400f3c2d","Type":"ContainerDied","Data":"35b85e7483e7eef05c04487fb31647159126652e49ecd7fb78e3b8de332aed6f"} Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.565352 4735 scope.go:117] "RemoveContainer" containerID="3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.577234 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.585862 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gvgxr"] Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.598193 4735 scope.go:117] "RemoveContainer" containerID="e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.637858 4735 scope.go:117] "RemoveContainer" containerID="074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.675769 4735 scope.go:117] "RemoveContainer" containerID="3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae" Jan 31 15:39:27 crc kubenswrapper[4735]: E0131 15:39:27.676241 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae\": container with ID starting with 
3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae not found: ID does not exist" containerID="3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.676277 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae"} err="failed to get container status \"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae\": rpc error: code = NotFound desc = could not find container \"3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae\": container with ID starting with 3fa2dfb7583c7d4359cf9a39f33a22ea4ae7c0a38395fa1a55985cd6b9e8d0ae not found: ID does not exist" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.676319 4735 scope.go:117] "RemoveContainer" containerID="e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61" Jan 31 15:39:27 crc kubenswrapper[4735]: E0131 15:39:27.676818 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61\": container with ID starting with e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61 not found: ID does not exist" containerID="e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.676856 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61"} err="failed to get container status \"e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61\": rpc error: code = NotFound desc = could not find container \"e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61\": container with ID starting with e856299668c2b81816e32171896194d35e105589d613d3dabe89475271870b61 not found: ID does not exist" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.676869 4735 scope.go:117] "RemoveContainer" containerID="074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2" Jan 31 15:39:27 crc kubenswrapper[4735]: E0131 15:39:27.677117 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2\": container with ID starting with 074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2 not found: ID does not exist" containerID="074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2" Jan 31 15:39:27 crc kubenswrapper[4735]: I0131 15:39:27.677138 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2"} err="failed to get container status \"074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2\": rpc error: code = NotFound desc = could not find container \"074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2\": container with ID starting with 074b5bea06c345235e28a95145b4499b4c58289861cf1ed957a3fb8db670dac2 not found: ID does not exist" Jan 31 15:39:28 crc kubenswrapper[4735]: I0131 15:39:28.540361 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:39:28 crc kubenswrapper[4735]: E0131 15:39:28.541158 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:39:29 crc kubenswrapper[4735]: I0131 15:39:29.563563 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" path="/var/lib/kubelet/pods/278035a9-e20c-4592-8fc1-7136400f3c2d/volumes" Jan 31 15:39:40 crc kubenswrapper[4735]: I0131 15:39:40.540901 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:39:40 crc kubenswrapper[4735]: E0131 15:39:40.542303 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:39:53 crc kubenswrapper[4735]: I0131 15:39:53.540391 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:39:53 crc kubenswrapper[4735]: E0131 15:39:53.541198 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.116630 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 15:39:55 crc kubenswrapper[4735]: E0131 15:39:55.117615 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="extract-utilities" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.117638 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="extract-utilities" Jan 31 15:39:55 crc kubenswrapper[4735]: E0131 15:39:55.117665 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="extract-content" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.117676 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="extract-content" Jan 31 15:39:55 crc kubenswrapper[4735]: E0131 15:39:55.117694 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="registry-server" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.117708 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="registry-server" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.117989 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="278035a9-e20c-4592-8fc1-7136400f3c2d" containerName="registry-server" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 
15:39:55.118971 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.122001 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.122547 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.124484 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.128273 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wk4bl" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.160982 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.229511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.229737 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzr7\" (UniqueName: \"kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.229955 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230018 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230042 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230086 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230215 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230258 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.230415 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332136 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332166 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332213 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332259 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332300 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332351 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332399 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzr7\" (UniqueName: \"kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.332976 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.333071 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.333767 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.334572 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.335848 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.341583 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.341814 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 
15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.342416 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.368470 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzr7\" (UniqueName: \"kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.387196 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.461918 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 15:39:55 crc kubenswrapper[4735]: I0131 15:39:55.919683 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 15:39:56 crc kubenswrapper[4735]: I0131 15:39:56.858758 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f","Type":"ContainerStarted","Data":"04a16c258f4f15e3937df05ddda265976b9e2cdcf943542616977c3baf8dd5e7"} Jan 31 15:40:04 crc kubenswrapper[4735]: I0131 15:40:04.540364 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:40:04 crc kubenswrapper[4735]: E0131 15:40:04.541361 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.440563 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.443605 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.455715 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.478843 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.478945 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.478980 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmln6\" (UniqueName: \"kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.581123 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.581177 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmln6\" (UniqueName: \"kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.581326 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.581771 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.581783 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.599196 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fmln6\" (UniqueName: \"kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6\") pod \"redhat-marketplace-46st6\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:09 crc kubenswrapper[4735]: I0131 15:40:09.803855 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:19 crc kubenswrapper[4735]: I0131 15:40:19.540209 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:40:19 crc kubenswrapper[4735]: E0131 15:40:19.541148 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:40:25 crc kubenswrapper[4735]: E0131 15:40:25.930389 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 15:40:25 crc kubenswrapper[4735]: E0131 15:40:25.930939 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mzr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 15:40:25 crc kubenswrapper[4735]: E0131 15:40:25.932264 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" Jan 31 15:40:26 crc kubenswrapper[4735]: E0131 15:40:26.136317 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" Jan 31 15:40:26 crc kubenswrapper[4735]: I0131 15:40:26.218891 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:27 crc kubenswrapper[4735]: I0131 15:40:27.153004 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b494b51-886a-43e0-adad-627fe241f87d" containerID="8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31" exitCode=0 Jan 31 15:40:27 crc kubenswrapper[4735]: I0131 15:40:27.153072 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerDied","Data":"8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31"} Jan 31 15:40:27 crc kubenswrapper[4735]: I0131 15:40:27.153463 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerStarted","Data":"5efad6acaf9af05ff7e27b118545807fe2de8995107357da974f38bbee2fb085"} Jan 31 15:40:28 crc kubenswrapper[4735]: I0131 15:40:28.167266 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b494b51-886a-43e0-adad-627fe241f87d" containerID="67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646" exitCode=0 Jan 31 15:40:28 crc kubenswrapper[4735]: I0131 15:40:28.167456 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" 
event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerDied","Data":"67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646"} Jan 31 15:40:29 crc kubenswrapper[4735]: I0131 15:40:29.177491 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerStarted","Data":"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268"} Jan 31 15:40:29 crc kubenswrapper[4735]: I0131 15:40:29.200773 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-46st6" podStartSLOduration=18.762304003 podStartE2EDuration="20.200749198s" podCreationTimestamp="2026-01-31 15:40:09 +0000 UTC" firstStartedPulling="2026-01-31 15:40:27.156526774 +0000 UTC m=+2512.929855826" lastFinishedPulling="2026-01-31 15:40:28.594971979 +0000 UTC m=+2514.368301021" observedRunningTime="2026-01-31 15:40:29.19338541 +0000 UTC m=+2514.966714452" watchObservedRunningTime="2026-01-31 15:40:29.200749198 +0000 UTC m=+2514.974078250" Jan 31 15:40:29 crc kubenswrapper[4735]: I0131 15:40:29.804860 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:29 crc kubenswrapper[4735]: I0131 15:40:29.805140 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:30 crc kubenswrapper[4735]: I0131 15:40:30.852153 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-46st6" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="registry-server" probeResult="failure" output=< Jan 31 15:40:30 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:40:30 crc kubenswrapper[4735]: > Jan 31 15:40:31 crc kubenswrapper[4735]: I0131 15:40:31.540667 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:40:31 crc kubenswrapper[4735]: E0131 15:40:31.541186 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:40:39 crc kubenswrapper[4735]: I0131 15:40:39.882122 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:39 crc kubenswrapper[4735]: I0131 15:40:39.949858 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:40 crc kubenswrapper[4735]: I0131 15:40:40.624171 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.310666 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-46st6" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="registry-server" containerID="cri-o://d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268" gracePeriod=2 Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 
15:40:41.838730 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.956460 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content\") pod \"1b494b51-886a-43e0-adad-627fe241f87d\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.956574 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities\") pod \"1b494b51-886a-43e0-adad-627fe241f87d\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.956727 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmln6\" (UniqueName: \"kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6\") pod \"1b494b51-886a-43e0-adad-627fe241f87d\" (UID: \"1b494b51-886a-43e0-adad-627fe241f87d\") " Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.957610 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities" (OuterVolumeSpecName: "utilities") pod "1b494b51-886a-43e0-adad-627fe241f87d" (UID: "1b494b51-886a-43e0-adad-627fe241f87d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.969367 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6" (OuterVolumeSpecName: "kube-api-access-fmln6") pod "1b494b51-886a-43e0-adad-627fe241f87d" (UID: "1b494b51-886a-43e0-adad-627fe241f87d"). InnerVolumeSpecName "kube-api-access-fmln6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:40:41 crc kubenswrapper[4735]: I0131 15:40:41.979225 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b494b51-886a-43e0-adad-627fe241f87d" (UID: "1b494b51-886a-43e0-adad-627fe241f87d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.025138 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.059226 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmln6\" (UniqueName: \"kubernetes.io/projected/1b494b51-886a-43e0-adad-627fe241f87d-kube-api-access-fmln6\") on node \"crc\" DevicePath \"\"" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.059275 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.059287 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b494b51-886a-43e0-adad-627fe241f87d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.325309 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b494b51-886a-43e0-adad-627fe241f87d" containerID="d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268" exitCode=0 Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.325356 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerDied","Data":"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268"} Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.325383 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46st6" event={"ID":"1b494b51-886a-43e0-adad-627fe241f87d","Type":"ContainerDied","Data":"5efad6acaf9af05ff7e27b118545807fe2de8995107357da974f38bbee2fb085"} Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.325402 4735 scope.go:117] "RemoveContainer" containerID="d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.325432 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46st6" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.360496 4735 scope.go:117] "RemoveContainer" containerID="67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.366591 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.373713 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-46st6"] Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.382513 4735 scope.go:117] "RemoveContainer" containerID="8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.404860 4735 scope.go:117] "RemoveContainer" containerID="d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268" Jan 31 15:40:42 crc kubenswrapper[4735]: E0131 15:40:42.405588 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268\": container with ID starting with d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268 not found: ID does not exist" containerID="d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.405625 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268"} err="failed to get container status \"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268\": rpc error: code = NotFound desc = could not find container \"d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268\": container with ID starting with d52e6eeb8a5b1a4fb23cc82d8576d9a0e61273e3e882893f25d60c0528ceb268 not found: ID does not exist" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.405653 4735 scope.go:117] "RemoveContainer" containerID="67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646" Jan 31 15:40:42 crc kubenswrapper[4735]: E0131 15:40:42.406078 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646\": container with ID starting with 67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646 not found: ID does not exist" containerID="67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.406112 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646"} err="failed to get container status \"67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646\": rpc error: code = NotFound desc = could not find container \"67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646\": container with ID starting with 67e49abe5ab9c1211abd3a8cb31cc4fd049d869784f5e2f94e2b5e28d49c1646 not found: ID does not exist" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.406161 4735 scope.go:117] "RemoveContainer" containerID="8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31" Jan 31 15:40:42 crc kubenswrapper[4735]: E0131 15:40:42.406578 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31\": container with ID starting with 8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31 not found: ID does not exist" containerID="8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31" Jan 31 15:40:42 crc kubenswrapper[4735]: I0131 15:40:42.406605 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31"} err="failed to get container status \"8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31\": rpc error: code = NotFound desc = could not find container \"8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31\": container with ID starting with 8a3198d13b400e4e488f9ea54067ee0bfdf1b898ba835ec6383754fafa00ef31 not found: ID does not exist" Jan 31 15:40:43 crc kubenswrapper[4735]: I0131 15:40:43.337968 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f","Type":"ContainerStarted","Data":"7c26d91435329394ca3d916d45115754bf7d34122a0687fc85c331ef6e90a7b0"} Jan 31 15:40:43 crc kubenswrapper[4735]: I0131 15:40:43.367690 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.277741114 podStartE2EDuration="49.367670374s" podCreationTimestamp="2026-01-31 15:39:54 +0000 UTC" firstStartedPulling="2026-01-31 15:39:55.931025285 +0000 UTC m=+2481.704354367" lastFinishedPulling="2026-01-31 15:40:42.020954585 +0000 UTC m=+2527.794283627" observedRunningTime="2026-01-31 15:40:43.36506542 +0000 UTC m=+2529.138394482" watchObservedRunningTime="2026-01-31 15:40:43.367670374 +0000 UTC m=+2529.140999426" Jan 31 15:40:43 crc kubenswrapper[4735]: I0131 15:40:43.558532 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b494b51-886a-43e0-adad-627fe241f87d" path="/var/lib/kubelet/pods/1b494b51-886a-43e0-adad-627fe241f87d/volumes" Jan 31 15:40:45 crc kubenswrapper[4735]: I0131 15:40:45.560757 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:40:46 crc kubenswrapper[4735]: I0131 15:40:46.397789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d"} Jan 31 15:43:07 crc kubenswrapper[4735]: I0131 15:43:07.345936 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:43:07 crc kubenswrapper[4735]: I0131 15:43:07.346363 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:43:37 crc kubenswrapper[4735]: I0131 15:43:37.345736 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:43:37 crc kubenswrapper[4735]: I0131 15:43:37.346332 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:44:07 crc kubenswrapper[4735]: I0131 15:44:07.346080 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:44:07 crc kubenswrapper[4735]: I0131 15:44:07.346698 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:44:07 crc kubenswrapper[4735]: I0131 15:44:07.346771 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:44:07 crc kubenswrapper[4735]: I0131 15:44:07.347654 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:44:07 crc kubenswrapper[4735]: I0131 15:44:07.347728 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d" gracePeriod=600 Jan 31 15:44:08 crc kubenswrapper[4735]: I0131 15:44:08.499116 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d" exitCode=0 Jan 31 15:44:08 crc kubenswrapper[4735]: I0131 15:44:08.499166 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d"} Jan 31 15:44:08 crc kubenswrapper[4735]: I0131 15:44:08.500100 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6"} Jan 31 15:44:08 crc kubenswrapper[4735]: I0131 15:44:08.500149 4735 scope.go:117] "RemoveContainer" containerID="44daaa980a4ec0c9ff6d3c7c522132df724e1691170ca876a6d44da15399a989" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.166800 4735 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm"] Jan 31 15:45:00 crc kubenswrapper[4735]: E0131 15:45:00.168042 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="extract-content" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.168065 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="extract-content" Jan 31 15:45:00 crc kubenswrapper[4735]: E0131 15:45:00.168080 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="extract-utilities" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.168094 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="extract-utilities" Jan 31 15:45:00 crc kubenswrapper[4735]: E0131 15:45:00.168142 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="registry-server" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.168154 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="registry-server" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.168458 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b494b51-886a-43e0-adad-627fe241f87d" containerName="registry-server" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.169455 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.171749 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.173159 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.180471 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm"] Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.208096 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.208356 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2wx\" (UniqueName: \"kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.208658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: 
\"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.310839 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl2wx\" (UniqueName: \"kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.310959 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.311055 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.313574 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.318507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.336796 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2wx\" (UniqueName: \"kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx\") pod \"collect-profiles-29497905-8mckm\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:00 crc kubenswrapper[4735]: I0131 15:45:00.516181 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:01 crc kubenswrapper[4735]: I0131 15:45:01.001064 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm"] Jan 31 15:45:01 crc kubenswrapper[4735]: W0131 15:45:01.008074 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce776119_6431_4eb6_918f_a60c99107de7.slice/crio-fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2 WatchSource:0}: Error finding container fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2: Status 404 returned error can't find the container with id fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2 Jan 31 15:45:01 crc kubenswrapper[4735]: I0131 15:45:01.044203 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" event={"ID":"ce776119-6431-4eb6-918f-a60c99107de7","Type":"ContainerStarted","Data":"fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2"} Jan 31 15:45:02 crc kubenswrapper[4735]: I0131 15:45:02.059232 4735 generic.go:334] "Generic (PLEG): container finished" podID="ce776119-6431-4eb6-918f-a60c99107de7" containerID="be4a96d5594fd204954c3e668aa9e4c69bcbbad97c736b004b484abb0642131c" exitCode=0 Jan 31 15:45:02 crc kubenswrapper[4735]: I0131 15:45:02.059366 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" event={"ID":"ce776119-6431-4eb6-918f-a60c99107de7","Type":"ContainerDied","Data":"be4a96d5594fd204954c3e668aa9e4c69bcbbad97c736b004b484abb0642131c"} Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.516546 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.577918 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl2wx\" (UniqueName: \"kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx\") pod \"ce776119-6431-4eb6-918f-a60c99107de7\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.577982 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume\") pod \"ce776119-6431-4eb6-918f-a60c99107de7\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.578084 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume\") pod \"ce776119-6431-4eb6-918f-a60c99107de7\" (UID: \"ce776119-6431-4eb6-918f-a60c99107de7\") " Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.579051 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce776119-6431-4eb6-918f-a60c99107de7" (UID: "ce776119-6431-4eb6-918f-a60c99107de7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.594902 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx" (OuterVolumeSpecName: "kube-api-access-tl2wx") pod "ce776119-6431-4eb6-918f-a60c99107de7" (UID: "ce776119-6431-4eb6-918f-a60c99107de7"). InnerVolumeSpecName "kube-api-access-tl2wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.597356 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce776119-6431-4eb6-918f-a60c99107de7" (UID: "ce776119-6431-4eb6-918f-a60c99107de7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.680880 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce776119-6431-4eb6-918f-a60c99107de7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.680952 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl2wx\" (UniqueName: \"kubernetes.io/projected/ce776119-6431-4eb6-918f-a60c99107de7-kube-api-access-tl2wx\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:03 crc kubenswrapper[4735]: I0131 15:45:03.680972 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce776119-6431-4eb6-918f-a60c99107de7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:04 crc kubenswrapper[4735]: I0131 15:45:04.082515 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" event={"ID":"ce776119-6431-4eb6-918f-a60c99107de7","Type":"ContainerDied","Data":"fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2"} Jan 31 15:45:04 crc kubenswrapper[4735]: I0131 15:45:04.082806 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6c40e772a1edd638281ff1dcda44765bbea4d43c4a09c705af18d303bac1d2" Jan 31 15:45:04 crc kubenswrapper[4735]: I0131 15:45:04.082691 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497905-8mckm" Jan 31 15:45:04 crc kubenswrapper[4735]: I0131 15:45:04.592291 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf"] Jan 31 15:45:04 crc kubenswrapper[4735]: I0131 15:45:04.601852 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-vprcf"] Jan 31 15:45:05 crc kubenswrapper[4735]: I0131 15:45:05.556203 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22de0c73-39eb-46e4-aa76-3b0bb86e327b" path="/var/lib/kubelet/pods/22de0c73-39eb-46e4-aa76-3b0bb86e327b/volumes" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.533890 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:07 crc kubenswrapper[4735]: E0131 15:45:07.534695 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce776119-6431-4eb6-918f-a60c99107de7" containerName="collect-profiles" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.534713 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce776119-6431-4eb6-918f-a60c99107de7" containerName="collect-profiles" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.534973 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce776119-6431-4eb6-918f-a60c99107de7" containerName="collect-profiles" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.536644 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.566475 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kq9\" (UniqueName: \"kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.566909 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.568149 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.580145 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.670270 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.670348 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.670505 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kq9\" (UniqueName: \"kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.670967 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.671446 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.693482 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56kq9\" (UniqueName: \"kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9\") pod \"redhat-operators-h59l9\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:07 crc kubenswrapper[4735]: I0131 15:45:07.869154 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:08 crc kubenswrapper[4735]: I0131 15:45:08.334364 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:09 crc kubenswrapper[4735]: I0131 15:45:09.136932 4735 generic.go:334] "Generic (PLEG): container finished" podID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerID="042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210" exitCode=0 Jan 31 15:45:09 crc kubenswrapper[4735]: I0131 15:45:09.137009 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerDied","Data":"042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210"} Jan 31 15:45:09 crc kubenswrapper[4735]: I0131 15:45:09.137200 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerStarted","Data":"8427ff2f89a0961f33c0e020c6a155f3758f520e28eca8e88c0721b31a588dd8"} Jan 31 15:45:09 crc kubenswrapper[4735]: I0131 15:45:09.139944 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:45:11 crc kubenswrapper[4735]: I0131 15:45:11.157143 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerStarted","Data":"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834"} Jan 31 15:45:13 crc kubenswrapper[4735]: I0131 15:45:13.181530 4735 generic.go:334] "Generic (PLEG): container finished" podID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerID="59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834" exitCode=0 Jan 31 15:45:13 crc kubenswrapper[4735]: I0131 15:45:13.181627 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerDied","Data":"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834"} Jan 31 15:45:14 crc kubenswrapper[4735]: I0131 15:45:14.195691 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerStarted","Data":"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0"} Jan 31 15:45:14 crc kubenswrapper[4735]: I0131 15:45:14.222499 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h59l9" podStartSLOduration=2.637421747 podStartE2EDuration="7.222470222s" podCreationTimestamp="2026-01-31 15:45:07 +0000 UTC" firstStartedPulling="2026-01-31 15:45:09.139397798 +0000 UTC m=+2794.912726880" lastFinishedPulling="2026-01-31 15:45:13.724446313 +0000 UTC m=+2799.497775355" observedRunningTime="2026-01-31 15:45:14.215964998 +0000 UTC m=+2799.989294080" watchObservedRunningTime="2026-01-31 15:45:14.222470222 +0000 UTC m=+2799.995799304" Jan 31 15:45:17 crc kubenswrapper[4735]: I0131 15:45:17.870175 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:17 crc kubenswrapper[4735]: I0131 15:45:17.870713 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:18 crc 
kubenswrapper[4735]: I0131 15:45:18.940098 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h59l9" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="registry-server" probeResult="failure" output=< Jan 31 15:45:18 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:45:18 crc kubenswrapper[4735]: > Jan 31 15:45:27 crc kubenswrapper[4735]: I0131 15:45:27.939506 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:28 crc kubenswrapper[4735]: I0131 15:45:28.004151 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:28 crc kubenswrapper[4735]: I0131 15:45:28.192760 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.349926 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h59l9" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="registry-server" containerID="cri-o://ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0" gracePeriod=2 Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.883648 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.975490 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities\") pod \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.975542 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content\") pod \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.975572 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56kq9\" (UniqueName: \"kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9\") pod \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\" (UID: \"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a\") " Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.976751 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities" (OuterVolumeSpecName: "utilities") pod "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" (UID: "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:45:29 crc kubenswrapper[4735]: I0131 15:45:29.988653 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9" (OuterVolumeSpecName: "kube-api-access-56kq9") pod "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" (UID: "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a"). InnerVolumeSpecName "kube-api-access-56kq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.078516 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.078795 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56kq9\" (UniqueName: \"kubernetes.io/projected/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-kube-api-access-56kq9\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.102887 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" (UID: "9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.181653 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.361513 4735 generic.go:334] "Generic (PLEG): container finished" podID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerID="ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0" exitCode=0 Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.361565 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerDied","Data":"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0"} Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.361644 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59l9" event={"ID":"9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a","Type":"ContainerDied","Data":"8427ff2f89a0961f33c0e020c6a155f3758f520e28eca8e88c0721b31a588dd8"} Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.361666 4735 scope.go:117] "RemoveContainer" containerID="ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.362931 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h59l9" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.406874 4735 scope.go:117] "RemoveContainer" containerID="59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.415749 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.437241 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h59l9"] Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.443081 4735 scope.go:117] "RemoveContainer" containerID="042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.480744 4735 scope.go:117] "RemoveContainer" containerID="ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0" Jan 31 15:45:30 crc kubenswrapper[4735]: E0131 15:45:30.482523 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0\": container with ID starting with ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0 not found: ID does not exist" containerID="ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.482569 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0"} err="failed to get container status \"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0\": rpc error: code = NotFound desc = could not find container \"ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0\": container with ID starting with ae1faa1dd037a8d83843205a1ef0f32f1538b5425255ac5337d1d83b92e6ebc0 not found: ID does not exist" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.482597 4735 scope.go:117] "RemoveContainer" containerID="59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834" Jan 31 15:45:30 crc kubenswrapper[4735]: E0131 15:45:30.482909 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834\": container with ID starting with 59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834 not found: ID does not exist" containerID="59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.482932 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834"} err="failed to get container status \"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834\": rpc error: code = NotFound desc = could not find container \"59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834\": container with ID starting with 59e41d40c6f75a4a5b1f230d1ac3d3d80b20105d634738fd445c2b4157373834 not found: ID does not exist" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.482947 4735 scope.go:117] "RemoveContainer" containerID="042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210" Jan 31 15:45:30 crc kubenswrapper[4735]: E0131 15:45:30.483706 4735 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210\": container with ID starting with 042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210 not found: ID does not exist" containerID="042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210" Jan 31 15:45:30 crc kubenswrapper[4735]: I0131 15:45:30.483729 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210"} err="failed to get container status \"042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210\": rpc error: code = NotFound desc = could not find container \"042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210\": container with ID starting with 042a77f1330d2617926c7909912461e2ef9cc3fd2e65232f42ab48bc7e8a0210 not found: ID does not exist" Jan 31 15:45:31 crc kubenswrapper[4735]: I0131 15:45:31.553097 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" path="/var/lib/kubelet/pods/9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a/volumes" Jan 31 15:45:41 crc kubenswrapper[4735]: I0131 15:45:41.478214 4735 scope.go:117] "RemoveContainer" containerID="f4bc9601e952d12489d33e2899f4c4757280f45f232cf7e30ed696a6d0ad1642" Jan 31 15:46:07 crc kubenswrapper[4735]: I0131 15:46:07.346022 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:46:07 crc kubenswrapper[4735]: I0131 15:46:07.346706 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:46:37 crc kubenswrapper[4735]: I0131 15:46:37.346840 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:46:37 crc kubenswrapper[4735]: I0131 15:46:37.347631 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.021035 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:46:56 crc kubenswrapper[4735]: E0131 15:46:56.022216 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="extract-content" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.022239 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="extract-content" Jan 31 15:46:56 crc kubenswrapper[4735]: E0131 15:46:56.022267 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="registry-server" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.022280 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="registry-server" Jan 31 15:46:56 crc kubenswrapper[4735]: E0131 15:46:56.022303 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="extract-utilities" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.022316 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="extract-utilities" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.022747 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c985e7d-b6b6-42e7-9e51-d9d5587fdd0a" containerName="registry-server" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.025231 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.032870 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.082357 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.082402 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.082459 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4j8\" (UniqueName: \"kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.184869 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw4j8\" (UniqueName: \"kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.185141 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.185191 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities\") pod 
\"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.185870 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.185919 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.216617 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw4j8\" (UniqueName: \"kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8\") pod \"certified-operators-pl5h4\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.384979 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:46:56 crc kubenswrapper[4735]: I0131 15:46:56.899034 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:46:57 crc kubenswrapper[4735]: I0131 15:46:57.284492 4735 generic.go:334] "Generic (PLEG): container finished" podID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerID="a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f" exitCode=0 Jan 31 15:46:57 crc kubenswrapper[4735]: I0131 15:46:57.284603 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerDied","Data":"a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f"} Jan 31 15:46:57 crc kubenswrapper[4735]: I0131 15:46:57.284886 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerStarted","Data":"06d108a19f292edfaec08a16740abae3b7522f2b24ba290eea5b9389fa9218ca"} Jan 31 15:46:58 crc kubenswrapper[4735]: I0131 15:46:58.294942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerStarted","Data":"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db"} Jan 31 15:46:59 crc kubenswrapper[4735]: I0131 15:46:59.312134 4735 generic.go:334] "Generic (PLEG): container finished" podID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerID="748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db" exitCode=0 Jan 31 15:46:59 crc kubenswrapper[4735]: I0131 15:46:59.312266 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerDied","Data":"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db"} Jan 31 15:47:00 crc kubenswrapper[4735]: I0131 15:47:00.325215 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerStarted","Data":"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6"} Jan 31 15:47:00 crc kubenswrapper[4735]: I0131 15:47:00.369581 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pl5h4" podStartSLOduration=1.8931404550000002 podStartE2EDuration="4.369547735s" podCreationTimestamp="2026-01-31 15:46:56 +0000 UTC" firstStartedPulling="2026-01-31 15:46:57.287252386 +0000 UTC m=+2903.060581428" lastFinishedPulling="2026-01-31 15:46:59.763659626 +0000 UTC m=+2905.536988708" observedRunningTime="2026-01-31 15:47:00.34991488 +0000 UTC m=+2906.123243942" watchObservedRunningTime="2026-01-31 15:47:00.369547735 +0000 UTC m=+2906.142876798" Jan 31 15:47:06 crc kubenswrapper[4735]: I0131 15:47:06.385134 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:06 crc kubenswrapper[4735]: I0131 15:47:06.385569 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:06 crc kubenswrapper[4735]: I0131 15:47:06.434874 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.346192 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.346251 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.346298 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.347818 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.347974 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" gracePeriod=600 Jan 31 15:47:07 crc kubenswrapper[4735]: E0131 15:47:07.482275 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.486817 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:07 crc kubenswrapper[4735]: I0131 15:47:07.532816 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:47:08 crc kubenswrapper[4735]: I0131 15:47:08.414354 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" exitCode=0 Jan 31 15:47:08 crc kubenswrapper[4735]: I0131 15:47:08.414469 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6"} Jan 31 15:47:08 crc kubenswrapper[4735]: I0131 15:47:08.414542 4735 scope.go:117] "RemoveContainer" containerID="641c11e50162de0f9aeb7806e33a345f6e9eaa77e1303a6a782a72914ffc661d" Jan 31 15:47:08 crc kubenswrapper[4735]: I0131 15:47:08.415244 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:47:08 crc kubenswrapper[4735]: E0131 15:47:08.415586 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:47:09 crc kubenswrapper[4735]: I0131 15:47:09.431644 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pl5h4" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="registry-server" containerID="cri-o://2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6" gracePeriod=2 Jan 31 15:47:09 crc kubenswrapper[4735]: I0131 15:47:09.901704 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.000165 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content\") pod \"52b6b16d-6817-4cb9-aed1-b623e869dc53\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.000219 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw4j8\" (UniqueName: \"kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8\") pod \"52b6b16d-6817-4cb9-aed1-b623e869dc53\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.000284 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities\") pod \"52b6b16d-6817-4cb9-aed1-b623e869dc53\" (UID: \"52b6b16d-6817-4cb9-aed1-b623e869dc53\") " Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.001429 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities" (OuterVolumeSpecName: "utilities") pod "52b6b16d-6817-4cb9-aed1-b623e869dc53" (UID: "52b6b16d-6817-4cb9-aed1-b623e869dc53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.005743 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8" (OuterVolumeSpecName: "kube-api-access-nw4j8") pod "52b6b16d-6817-4cb9-aed1-b623e869dc53" (UID: "52b6b16d-6817-4cb9-aed1-b623e869dc53"). InnerVolumeSpecName "kube-api-access-nw4j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.043275 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52b6b16d-6817-4cb9-aed1-b623e869dc53" (UID: "52b6b16d-6817-4cb9-aed1-b623e869dc53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.102814 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw4j8\" (UniqueName: \"kubernetes.io/projected/52b6b16d-6817-4cb9-aed1-b623e869dc53-kube-api-access-nw4j8\") on node \"crc\" DevicePath \"\"" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.102859 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.102874 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b6b16d-6817-4cb9-aed1-b623e869dc53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.443047 4735 generic.go:334] "Generic (PLEG): container finished" podID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerID="2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6" exitCode=0 Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.443114 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerDied","Data":"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6"} Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.443379 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl5h4" event={"ID":"52b6b16d-6817-4cb9-aed1-b623e869dc53","Type":"ContainerDied","Data":"06d108a19f292edfaec08a16740abae3b7522f2b24ba290eea5b9389fa9218ca"} Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.443404 4735 scope.go:117] "RemoveContainer" containerID="2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.443146 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl5h4" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.471211 4735 scope.go:117] "RemoveContainer" containerID="748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.505291 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.513327 4735 scope.go:117] "RemoveContainer" containerID="a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.517683 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pl5h4"] Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.560200 4735 scope.go:117] "RemoveContainer" containerID="2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6" Jan 31 15:47:10 crc kubenswrapper[4735]: E0131 15:47:10.560864 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6\": container with ID starting with 2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6 not found: ID does not exist" containerID="2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.560903 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6"} err="failed to get container status \"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6\": rpc error: code = NotFound desc = could not find container \"2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6\": container with ID starting with 2746221ff9333352f27f51f560c885eaef0bd1195bd67ab7d902365907a81cb6 not found: ID does not exist" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.560931 4735 scope.go:117] "RemoveContainer" containerID="748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db" Jan 31 15:47:10 crc kubenswrapper[4735]: E0131 15:47:10.561186 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db\": container with ID starting with 748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db not found: ID does not exist" containerID="748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.561221 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db"} err="failed to get container status \"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db\": rpc error: code = NotFound desc = could not find container \"748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db\": container with ID starting with 748b8cf9a8dc1190dcf53555d07392a21a97b5e7df9619f7ded37254a8a096db not found: ID does not exist" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.561239 4735 scope.go:117] "RemoveContainer" containerID="a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f" Jan 31 15:47:10 crc kubenswrapper[4735]: E0131 15:47:10.561655 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f\": container with ID starting with a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f not found: ID does not exist" containerID="a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f" Jan 31 15:47:10 crc kubenswrapper[4735]: I0131 15:47:10.561680 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f"} err="failed to get container status \"a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f\": rpc error: code = NotFound desc = could not find container \"a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f\": container with ID starting with a60243ab4278fe374a9775764b86cc9ace42c0912f342bbdf89c22943c638c4f not found: ID does not exist" Jan 31 15:47:11 crc kubenswrapper[4735]: I0131 15:47:11.561887 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" path="/var/lib/kubelet/pods/52b6b16d-6817-4cb9-aed1-b623e869dc53/volumes" Jan 31 15:47:23 crc kubenswrapper[4735]: I0131 15:47:23.541575 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:47:23 crc kubenswrapper[4735]: E0131 15:47:23.542679 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:47:38 crc kubenswrapper[4735]: I0131 15:47:38.540615 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:47:38 crc kubenswrapper[4735]: E0131 15:47:38.541538 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:47:51 crc kubenswrapper[4735]: I0131 15:47:51.541123 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:47:51 crc kubenswrapper[4735]: E0131 15:47:51.542375 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:48:05 crc kubenswrapper[4735]: I0131 15:48:05.553477 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:48:05 crc kubenswrapper[4735]: E0131 15:48:05.555094 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:48:19 crc kubenswrapper[4735]: I0131 15:48:19.540956 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:48:19 crc kubenswrapper[4735]: E0131 15:48:19.542098 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:48:34 crc kubenswrapper[4735]: I0131 15:48:34.540046 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:48:34 crc kubenswrapper[4735]: E0131 15:48:34.540959 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:48:49 crc kubenswrapper[4735]: I0131 15:48:49.540621 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:48:49 crc kubenswrapper[4735]: E0131 15:48:49.541840 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:49:01 crc kubenswrapper[4735]: I0131 15:49:01.540932 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:49:01 crc kubenswrapper[4735]: E0131 15:49:01.542657 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:49:16 crc kubenswrapper[4735]: I0131 15:49:16.540892 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:49:16 crc kubenswrapper[4735]: E0131 15:49:16.542230 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.150462 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:17 crc kubenswrapper[4735]: E0131 15:49:17.151373 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="registry-server" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.151396 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="registry-server" Jan 31 15:49:17 crc kubenswrapper[4735]: E0131 15:49:17.151439 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="extract-content" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.151447 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="extract-content" Jan 31 15:49:17 crc kubenswrapper[4735]: E0131 15:49:17.151458 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="extract-utilities" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.151520 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="extract-utilities" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.151962 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b6b16d-6817-4cb9-aed1-b623e869dc53" containerName="registry-server" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.154066 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.181788 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.246595 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.246868 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.246910 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlc9\" (UniqueName: \"kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.349599 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.350023 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.350168 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlc9\" (UniqueName: \"kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.350259 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.350070 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.373409 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dlc9\" (UniqueName: \"kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9\") pod \"community-operators-2t9gv\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:17 crc kubenswrapper[4735]: I0131 15:49:17.483378 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:18 crc kubenswrapper[4735]: I0131 15:49:18.094862 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:18 crc kubenswrapper[4735]: I0131 15:49:18.177337 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerStarted","Data":"1b9bee022200740bca342782ebf0758688c35a6235caa776507e16dc8b32efec"} Jan 31 15:49:19 crc kubenswrapper[4735]: I0131 15:49:19.191777 4735 generic.go:334] "Generic (PLEG): container finished" podID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerID="b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151" exitCode=0 Jan 31 15:49:19 crc kubenswrapper[4735]: I0131 15:49:19.191852 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerDied","Data":"b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151"} Jan 31 15:49:20 crc kubenswrapper[4735]: I0131 15:49:20.205288 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerStarted","Data":"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411"} Jan 31 15:49:21 crc kubenswrapper[4735]: I0131 15:49:21.220559 4735 generic.go:334] "Generic (PLEG): container finished" podID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerID="e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411" exitCode=0 Jan 31 15:49:21 crc kubenswrapper[4735]: I0131 15:49:21.220641 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerDied","Data":"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411"} Jan 31 15:49:22 crc kubenswrapper[4735]: I0131 15:49:22.232267 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerStarted","Data":"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee"} Jan 31 15:49:22 crc kubenswrapper[4735]: I0131 15:49:22.259511 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2t9gv" podStartSLOduration=2.839722016 podStartE2EDuration="5.259487459s" podCreationTimestamp="2026-01-31 15:49:17 +0000 UTC" firstStartedPulling="2026-01-31 15:49:19.195771886 +0000 UTC m=+3044.969100968" lastFinishedPulling="2026-01-31 15:49:21.615537329 +0000 UTC m=+3047.388866411" observedRunningTime="2026-01-31 15:49:22.253586033 +0000 UTC m=+3048.026915075" watchObservedRunningTime="2026-01-31 15:49:22.259487459 +0000 UTC m=+3048.032816531" Jan 31 15:49:27 crc kubenswrapper[4735]: I0131 15:49:27.483606 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:27 crc kubenswrapper[4735]: I0131 15:49:27.484224 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:27 crc kubenswrapper[4735]: I0131 15:49:27.540244 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:49:27 crc kubenswrapper[4735]: E0131 15:49:27.540565 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:49:27 crc kubenswrapper[4735]: I0131 15:49:27.556009 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:28 crc kubenswrapper[4735]: I0131 15:49:28.391814 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:28 crc kubenswrapper[4735]: I0131 15:49:28.497628 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:30 crc kubenswrapper[4735]: I0131 15:49:30.331579 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2t9gv" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="registry-server" containerID="cri-o://b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee" gracePeriod=2 Jan 31 15:49:30 crc kubenswrapper[4735]: I0131 15:49:30.960857 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.067013 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content\") pod \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.067136 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities\") pod \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.067261 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dlc9\" (UniqueName: \"kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9\") pod \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\" (UID: \"43d2ace9-84a1-4f79-ba45-6e3f39c8125e\") " Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.068213 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities" (OuterVolumeSpecName: "utilities") pod "43d2ace9-84a1-4f79-ba45-6e3f39c8125e" (UID: "43d2ace9-84a1-4f79-ba45-6e3f39c8125e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.078722 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9" (OuterVolumeSpecName: "kube-api-access-8dlc9") pod "43d2ace9-84a1-4f79-ba45-6e3f39c8125e" (UID: "43d2ace9-84a1-4f79-ba45-6e3f39c8125e"). InnerVolumeSpecName "kube-api-access-8dlc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.130227 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43d2ace9-84a1-4f79-ba45-6e3f39c8125e" (UID: "43d2ace9-84a1-4f79-ba45-6e3f39c8125e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.170073 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.170124 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.170136 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dlc9\" (UniqueName: \"kubernetes.io/projected/43d2ace9-84a1-4f79-ba45-6e3f39c8125e-kube-api-access-8dlc9\") on node \"crc\" DevicePath \"\"" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.344092 4735 generic.go:334] "Generic (PLEG): container finished" podID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerID="b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee" exitCode=0 Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.344170 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerDied","Data":"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee"} Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.344183 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t9gv" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.344230 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t9gv" event={"ID":"43d2ace9-84a1-4f79-ba45-6e3f39c8125e","Type":"ContainerDied","Data":"1b9bee022200740bca342782ebf0758688c35a6235caa776507e16dc8b32efec"} Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.344271 4735 scope.go:117] "RemoveContainer" containerID="b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.381059 4735 scope.go:117] "RemoveContainer" containerID="e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.411913 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.419648 4735 scope.go:117] "RemoveContainer" containerID="b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.426306 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2t9gv"] Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.469392 4735 scope.go:117] "RemoveContainer" containerID="b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee" Jan 31 15:49:31 crc kubenswrapper[4735]: E0131 15:49:31.469965 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee\": container with ID starting with b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee not found: ID does not exist" containerID="b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.470024 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee"} err="failed to get container status \"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee\": rpc error: code = NotFound desc = could not find container \"b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee\": container with ID starting with b3d57401b7bb3660a1067b6c3f80e8cca62475bdb475311c9f55e7c4ecf222ee not found: ID does not exist" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.470059 4735 scope.go:117] "RemoveContainer" containerID="e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411" Jan 31 15:49:31 crc kubenswrapper[4735]: E0131 15:49:31.470705 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411\": container with ID starting with e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411 not found: ID does not exist" containerID="e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.470745 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411"} err="failed to get container status \"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411\": rpc error: code = NotFound desc = could not find 
container \"e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411\": container with ID starting with e90717f64a93e7cb21ccd67c7250938e585ecb1589099044818108a8c9777411 not found: ID does not exist" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.470772 4735 scope.go:117] "RemoveContainer" containerID="b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151" Jan 31 15:49:31 crc kubenswrapper[4735]: E0131 15:49:31.471195 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151\": container with ID starting with b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151 not found: ID does not exist" containerID="b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.471234 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151"} err="failed to get container status \"b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151\": rpc error: code = NotFound desc = could not find container \"b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151\": container with ID starting with b9d6ff58ceea229774a8d20b7b6a8d834e8a55d2415281713a6207fc0475a151 not found: ID does not exist" Jan 31 15:49:31 crc kubenswrapper[4735]: I0131 15:49:31.555837 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" path="/var/lib/kubelet/pods/43d2ace9-84a1-4f79-ba45-6e3f39c8125e/volumes" Jan 31 15:49:40 crc kubenswrapper[4735]: I0131 15:49:40.540077 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:49:40 crc kubenswrapper[4735]: E0131 15:49:40.541134 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:49:52 crc kubenswrapper[4735]: I0131 15:49:52.540003 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:49:52 crc kubenswrapper[4735]: E0131 15:49:52.541039 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:06 crc kubenswrapper[4735]: I0131 15:50:06.540636 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:50:06 crc kubenswrapper[4735]: E0131 15:50:06.541680 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:18 crc kubenswrapper[4735]: I0131 15:50:18.552295 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:50:18 crc kubenswrapper[4735]: E0131 15:50:18.554136 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:31 crc kubenswrapper[4735]: I0131 15:50:31.540159 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:50:31 crc kubenswrapper[4735]: E0131 15:50:31.541017 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:44 crc kubenswrapper[4735]: I0131 15:50:44.540942 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:50:44 crc kubenswrapper[4735]: E0131 15:50:44.542131 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.293491 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:50:48 crc kubenswrapper[4735]: E0131 15:50:48.294679 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="extract-content" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.294702 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="extract-content" Jan 31 15:50:48 crc kubenswrapper[4735]: E0131 15:50:48.294728 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="registry-server" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.294740 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="registry-server" Jan 31 15:50:48 crc kubenswrapper[4735]: E0131 15:50:48.294768 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="extract-utilities" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.294808 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="extract-utilities" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.295116 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d2ace9-84a1-4f79-ba45-6e3f39c8125e" containerName="registry-server" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.297179 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.311813 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.360446 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.360597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.360755 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6rn\" (UniqueName: \"kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.462913 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.463020 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6rn\" (UniqueName: \"kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.463100 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.463538 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.463686 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.482712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6rn\" (UniqueName: \"kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn\") pod \"redhat-marketplace-pf5xs\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:48 crc kubenswrapper[4735]: I0131 15:50:48.621912 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:49 crc kubenswrapper[4735]: I0131 15:50:49.103600 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:50:49 crc kubenswrapper[4735]: I0131 15:50:49.134301 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerStarted","Data":"d18bf95a68e43bfc41daed83e6a0995731f37c61c8f382c3a0cc59e2e8a0640e"} Jan 31 15:50:50 crc kubenswrapper[4735]: I0131 15:50:50.159090 4735 generic.go:334] "Generic (PLEG): container finished" podID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerID="90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501" exitCode=0 Jan 31 15:50:50 crc kubenswrapper[4735]: I0131 15:50:50.159181 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerDied","Data":"90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501"} Jan 31 15:50:50 crc kubenswrapper[4735]: I0131 15:50:50.163718 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:50:52 crc kubenswrapper[4735]: I0131 15:50:52.186997 4735 generic.go:334] "Generic (PLEG): container finished" podID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerID="854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb" exitCode=0 Jan 31 15:50:52 crc kubenswrapper[4735]: I0131 15:50:52.187168 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerDied","Data":"854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb"} Jan 31 15:50:53 crc kubenswrapper[4735]: I0131 15:50:53.200815 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerStarted","Data":"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479"} Jan 31 15:50:53 crc kubenswrapper[4735]: I0131 15:50:53.242577 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pf5xs" podStartSLOduration=2.704014729 podStartE2EDuration="5.242558379s" podCreationTimestamp="2026-01-31 15:50:48 +0000 UTC" firstStartedPulling="2026-01-31 15:50:50.163171007 +0000 UTC m=+3135.936500089" lastFinishedPulling="2026-01-31 15:50:52.701714687 +0000 UTC m=+3138.475043739" observedRunningTime="2026-01-31 15:50:53.23511752 
+0000 UTC m=+3139.008446592" watchObservedRunningTime="2026-01-31 15:50:53.242558379 +0000 UTC m=+3139.015887421" Jan 31 15:50:57 crc kubenswrapper[4735]: I0131 15:50:57.540056 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:50:57 crc kubenswrapper[4735]: E0131 15:50:57.541200 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:50:58 crc kubenswrapper[4735]: I0131 15:50:58.622622 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:58 crc kubenswrapper[4735]: I0131 15:50:58.623027 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:58 crc kubenswrapper[4735]: I0131 15:50:58.697785 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:59 crc kubenswrapper[4735]: I0131 15:50:59.339748 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:50:59 crc kubenswrapper[4735]: I0131 15:50:59.472826 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.294067 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pf5xs" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="registry-server" containerID="cri-o://3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479" gracePeriod=2 Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.825931 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.908101 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content\") pod \"4aae9996-db64-448a-9fdb-d3cd47ac785b\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.908395 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities\") pod \"4aae9996-db64-448a-9fdb-d3cd47ac785b\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.908472 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl6rn\" (UniqueName: \"kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn\") pod \"4aae9996-db64-448a-9fdb-d3cd47ac785b\" (UID: \"4aae9996-db64-448a-9fdb-d3cd47ac785b\") " Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.910253 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities" (OuterVolumeSpecName: "utilities") pod "4aae9996-db64-448a-9fdb-d3cd47ac785b" (UID: "4aae9996-db64-448a-9fdb-d3cd47ac785b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.932986 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn" (OuterVolumeSpecName: "kube-api-access-pl6rn") pod "4aae9996-db64-448a-9fdb-d3cd47ac785b" (UID: "4aae9996-db64-448a-9fdb-d3cd47ac785b"). InnerVolumeSpecName "kube-api-access-pl6rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:51:01 crc kubenswrapper[4735]: I0131 15:51:01.992614 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aae9996-db64-448a-9fdb-d3cd47ac785b" (UID: "4aae9996-db64-448a-9fdb-d3cd47ac785b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.010137 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.010165 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aae9996-db64-448a-9fdb-d3cd47ac785b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.010175 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl6rn\" (UniqueName: \"kubernetes.io/projected/4aae9996-db64-448a-9fdb-d3cd47ac785b-kube-api-access-pl6rn\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.308657 4735 generic.go:334] "Generic (PLEG): container finished" podID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerID="3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479" exitCode=0 Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.308706 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerDied","Data":"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479"} Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.308737 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pf5xs" event={"ID":"4aae9996-db64-448a-9fdb-d3cd47ac785b","Type":"ContainerDied","Data":"d18bf95a68e43bfc41daed83e6a0995731f37c61c8f382c3a0cc59e2e8a0640e"} Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.308761 4735 scope.go:117] "RemoveContainer" containerID="3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.308768 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pf5xs" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.344550 4735 scope.go:117] "RemoveContainer" containerID="854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.371125 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.377089 4735 scope.go:117] "RemoveContainer" containerID="90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.381494 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pf5xs"] Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.422336 4735 scope.go:117] "RemoveContainer" containerID="3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479" Jan 31 15:51:02 crc kubenswrapper[4735]: E0131 15:51:02.422789 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479\": container with ID starting with 3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479 not found: ID does not exist" containerID="3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.422842 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479"} err="failed to get container status \"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479\": rpc error: code = NotFound desc = could not find container \"3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479\": container with ID starting with 3bccca7c3fa3261bd487579adecc7053f56d04ea2873800294056eb3b012c479 not found: ID does not exist" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.422874 4735 scope.go:117] "RemoveContainer" containerID="854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb" Jan 31 15:51:02 crc kubenswrapper[4735]: E0131 15:51:02.423176 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb\": container with ID starting with 854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb not found: ID does not exist" containerID="854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.423211 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb"} err="failed to get container status \"854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb\": rpc error: code = NotFound desc = could not find container \"854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb\": container with ID starting with 854de296cde3b5e38a158eb37736dab2a780a89f25773bdeacff21610dea52bb not found: ID does not exist" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.423239 4735 scope.go:117] "RemoveContainer" containerID="90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501" Jan 31 15:51:02 crc kubenswrapper[4735]: E0131 15:51:02.423547 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501\": container with ID starting with 90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501 not found: ID does not exist" containerID="90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501" Jan 31 15:51:02 crc kubenswrapper[4735]: I0131 15:51:02.423665 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501"} err="failed to get container status \"90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501\": rpc error: code = NotFound desc = could not find container \"90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501\": container with ID starting with 90835d4e1841200858b7f6b76a8861453c472f41798cea8665308119b0a88501 not found: ID does not exist" Jan 31 15:51:03 crc kubenswrapper[4735]: I0131 15:51:03.559856 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" path="/var/lib/kubelet/pods/4aae9996-db64-448a-9fdb-d3cd47ac785b/volumes" Jan 31 15:51:11 crc kubenswrapper[4735]: I0131 15:51:11.540737 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:51:11 crc kubenswrapper[4735]: E0131 15:51:11.541297 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:51:24 crc kubenswrapper[4735]: I0131 15:51:24.540843 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:51:24 crc kubenswrapper[4735]: E0131 15:51:24.541732 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:51:25 crc kubenswrapper[4735]: E0131 15:51:25.796489 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc05b7f_c3c5_4fd8_807e_eb2d83710f4f.slice/crio-conmon-7c26d91435329394ca3d916d45115754bf7d34122a0687fc85c331ef6e90a7b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc05b7f_c3c5_4fd8_807e_eb2d83710f4f.slice/crio-7c26d91435329394ca3d916d45115754bf7d34122a0687fc85c331ef6e90a7b0.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:51:26 crc kubenswrapper[4735]: I0131 15:51:26.566347 4735 generic.go:334] "Generic (PLEG): container finished" podID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" containerID="7c26d91435329394ca3d916d45115754bf7d34122a0687fc85c331ef6e90a7b0" exitCode=0 Jan 31 15:51:26 crc kubenswrapper[4735]: I0131 15:51:26.566377 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f","Type":"ContainerDied","Data":"7c26d91435329394ca3d916d45115754bf7d34122a0687fc85c331ef6e90a7b0"} Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.954298 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988623 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988748 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988787 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988844 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988885 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.988959 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.989028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.989056 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.989091 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mzr7\" (UniqueName: \"kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7\") pod 
\"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\" (UID: \"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f\") " Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.992006 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.992415 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data" (OuterVolumeSpecName: "config-data") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.996487 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7" (OuterVolumeSpecName: "kube-api-access-8mzr7") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "kube-api-access-8mzr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:51:27 crc kubenswrapper[4735]: I0131 15:51:27.996606 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.000823 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.029641 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.035528 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.038288 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.065286 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" (UID: "3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.091928 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mzr7\" (UniqueName: \"kubernetes.io/projected/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-kube-api-access-8mzr7\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.091970 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.091984 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.091998 4735 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.092011 4735 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.092023 4735 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.092035 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.092049 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.092210 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.117620 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.194740 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.589601 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f","Type":"ContainerDied","Data":"04a16c258f4f15e3937df05ddda265976b9e2cdcf943542616977c3baf8dd5e7"} Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.589638 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a16c258f4f15e3937df05ddda265976b9e2cdcf943542616977c3baf8dd5e7" Jan 31 15:51:28 crc kubenswrapper[4735]: I0131 15:51:28.589653 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.727331 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 15:51:32 crc kubenswrapper[4735]: E0131 15:51:32.728246 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="extract-utilities" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728262 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="extract-utilities" Jan 31 15:51:32 crc kubenswrapper[4735]: E0131 15:51:32.728295 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" containerName="tempest-tests-tempest-tests-runner" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728303 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" containerName="tempest-tests-tempest-tests-runner" Jan 31 15:51:32 crc kubenswrapper[4735]: E0131 15:51:32.728314 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="extract-content" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728321 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="extract-content" Jan 31 15:51:32 crc kubenswrapper[4735]: E0131 15:51:32.728336 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="registry-server" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728343 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="registry-server" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728560 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aae9996-db64-448a-9fdb-d3cd47ac785b" containerName="registry-server" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.728587 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f" containerName="tempest-tests-tempest-tests-runner" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.729246 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.732462 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wk4bl" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.750214 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.796370 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.796503 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpgfz\" (UniqueName: \"kubernetes.io/projected/9bcab996-865d-4021-bd63-7e17f6093145-kube-api-access-mpgfz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.898413 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.898536 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpgfz\" (UniqueName: \"kubernetes.io/projected/9bcab996-865d-4021-bd63-7e17f6093145-kube-api-access-mpgfz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.898905 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.924284 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpgfz\" (UniqueName: \"kubernetes.io/projected/9bcab996-865d-4021-bd63-7e17f6093145-kube-api-access-mpgfz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:32 crc kubenswrapper[4735]: I0131 15:51:32.931529 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9bcab996-865d-4021-bd63-7e17f6093145\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:33 crc 
kubenswrapper[4735]: I0131 15:51:33.075363 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 15:51:33 crc kubenswrapper[4735]: I0131 15:51:33.593777 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 15:51:33 crc kubenswrapper[4735]: W0131 15:51:33.603929 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bcab996_865d_4021_bd63_7e17f6093145.slice/crio-52eb1d322bfea7ddf615c4aae737eaee90868f7dcc7d949e0c26ac3fa217cf07 WatchSource:0}: Error finding container 52eb1d322bfea7ddf615c4aae737eaee90868f7dcc7d949e0c26ac3fa217cf07: Status 404 returned error can't find the container with id 52eb1d322bfea7ddf615c4aae737eaee90868f7dcc7d949e0c26ac3fa217cf07 Jan 31 15:51:33 crc kubenswrapper[4735]: I0131 15:51:33.643514 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9bcab996-865d-4021-bd63-7e17f6093145","Type":"ContainerStarted","Data":"52eb1d322bfea7ddf615c4aae737eaee90868f7dcc7d949e0c26ac3fa217cf07"} Jan 31 15:51:35 crc kubenswrapper[4735]: I0131 15:51:35.667660 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9bcab996-865d-4021-bd63-7e17f6093145","Type":"ContainerStarted","Data":"303408c006bebc98a7dc8c7c295b52410477cf6f4927e34cd9b5b72d4cfc4e0b"} Jan 31 15:51:35 crc kubenswrapper[4735]: I0131 15:51:35.705241 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.8271070849999997 podStartE2EDuration="3.7052126s" podCreationTimestamp="2026-01-31 15:51:32 +0000 UTC" firstStartedPulling="2026-01-31 15:51:33.607563838 +0000 UTC m=+3179.380892880" lastFinishedPulling="2026-01-31 15:51:34.485669353 +0000 UTC m=+3180.258998395" observedRunningTime="2026-01-31 15:51:35.684706203 +0000 UTC m=+3181.458035305" watchObservedRunningTime="2026-01-31 15:51:35.7052126 +0000 UTC m=+3181.478541672" Jan 31 15:51:37 crc kubenswrapper[4735]: I0131 15:51:37.539629 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:51:37 crc kubenswrapper[4735]: E0131 15:51:37.541242 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:51:50 crc kubenswrapper[4735]: I0131 15:51:50.539858 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:51:50 crc kubenswrapper[4735]: E0131 15:51:50.540810 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" 
podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.202410 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jq2z/must-gather-bxn59"] Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.204530 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.207789 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9jq2z"/"kube-root-ca.crt" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.207942 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9jq2z"/"openshift-service-ca.crt" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.208523 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9jq2z"/"default-dockercfg-m8vg7" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.274462 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9jq2z/must-gather-bxn59"] Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.299659 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8v6l\" (UniqueName: \"kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.300098 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.401718 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.401857 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8v6l\" (UniqueName: \"kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.402200 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc kubenswrapper[4735]: I0131 15:51:56.430609 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8v6l\" (UniqueName: \"kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l\") pod \"must-gather-bxn59\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:56 crc 
kubenswrapper[4735]: I0131 15:51:56.523883 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:51:57 crc kubenswrapper[4735]: I0131 15:51:56.999734 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9jq2z/must-gather-bxn59"] Jan 31 15:51:57 crc kubenswrapper[4735]: I0131 15:51:57.894948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/must-gather-bxn59" event={"ID":"fc3bde80-85c4-436f-8741-d8c1248bdec8","Type":"ContainerStarted","Data":"ba43dc54af88b02b7d00a36f8fad7f798a3247f62b4fc752ee7f4ade387cceb1"} Jan 31 15:52:01 crc kubenswrapper[4735]: I0131 15:52:01.942529 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/must-gather-bxn59" event={"ID":"fc3bde80-85c4-436f-8741-d8c1248bdec8","Type":"ContainerStarted","Data":"d981c648dbd7beb2f616bc7f9496ab3b2c43304c0fe29bdbf2b0ee9cea8b8d42"} Jan 31 15:52:01 crc kubenswrapper[4735]: I0131 15:52:01.943183 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/must-gather-bxn59" event={"ID":"fc3bde80-85c4-436f-8741-d8c1248bdec8","Type":"ContainerStarted","Data":"3a7fedc66b09c5c250abf7e0a810c22087d528eddc1c42525dd7dbe6e0e1c981"} Jan 31 15:52:01 crc kubenswrapper[4735]: I0131 15:52:01.962168 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9jq2z/must-gather-bxn59" podStartSLOduration=1.877560165 podStartE2EDuration="5.962146077s" podCreationTimestamp="2026-01-31 15:51:56 +0000 UTC" firstStartedPulling="2026-01-31 15:51:57.007948579 +0000 UTC m=+3202.781277621" lastFinishedPulling="2026-01-31 15:52:01.092534461 +0000 UTC m=+3206.865863533" observedRunningTime="2026-01-31 15:52:01.958356611 +0000 UTC m=+3207.731685653" watchObservedRunningTime="2026-01-31 15:52:01.962146077 +0000 UTC m=+3207.735475149" Jan 31 15:52:02 crc kubenswrapper[4735]: I0131 15:52:02.540769 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:52:02 crc kubenswrapper[4735]: E0131 15:52:02.541242 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.250531 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-hjzlr"] Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.252402 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.335277 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.335380 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdpp7\" (UniqueName: \"kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.437392 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.437494 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdpp7\" (UniqueName: \"kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.437585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.466090 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdpp7\" (UniqueName: \"kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7\") pod \"crc-debug-hjzlr\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.572371 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:52:05 crc kubenswrapper[4735]: W0131 15:52:05.605999 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4840a2c1_42d5_4e96_96a2_ad4736633eaa.slice/crio-814248e5f660d4b74e0bc5293ad272d301242632ff288f9806dacdaa5efe578d WatchSource:0}: Error finding container 814248e5f660d4b74e0bc5293ad272d301242632ff288f9806dacdaa5efe578d: Status 404 returned error can't find the container with id 814248e5f660d4b74e0bc5293ad272d301242632ff288f9806dacdaa5efe578d Jan 31 15:52:05 crc kubenswrapper[4735]: I0131 15:52:05.982441 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" event={"ID":"4840a2c1-42d5-4e96-96a2-ad4736633eaa","Type":"ContainerStarted","Data":"814248e5f660d4b74e0bc5293ad272d301242632ff288f9806dacdaa5efe578d"} Jan 31 15:52:17 crc kubenswrapper[4735]: I0131 15:52:17.540294 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:52:18 crc kubenswrapper[4735]: I0131 15:52:18.089689 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97"} Jan 31 15:52:18 crc kubenswrapper[4735]: I0131 15:52:18.091580 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" event={"ID":"4840a2c1-42d5-4e96-96a2-ad4736633eaa","Type":"ContainerStarted","Data":"ca23ae4c50e5cac8dce8d6c5eb137aa7e59e300a35ab319345d27a6391fed110"} Jan 31 15:52:18 crc kubenswrapper[4735]: I0131 15:52:18.123897 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" podStartSLOduration=0.928615656 podStartE2EDuration="13.123870556s" podCreationTimestamp="2026-01-31 15:52:05 +0000 UTC" firstStartedPulling="2026-01-31 15:52:05.60905024 +0000 UTC m=+3211.382379282" lastFinishedPulling="2026-01-31 15:52:17.80430514 +0000 UTC m=+3223.577634182" observedRunningTime="2026-01-31 15:52:18.117731144 +0000 UTC m=+3223.891060206" watchObservedRunningTime="2026-01-31 15:52:18.123870556 +0000 UTC m=+3223.897199598" Jan 31 15:52:59 crc kubenswrapper[4735]: I0131 15:52:59.482175 4735 generic.go:334] "Generic (PLEG): container finished" podID="4840a2c1-42d5-4e96-96a2-ad4736633eaa" containerID="ca23ae4c50e5cac8dce8d6c5eb137aa7e59e300a35ab319345d27a6391fed110" exitCode=0 Jan 31 15:52:59 crc kubenswrapper[4735]: I0131 15:52:59.482655 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" event={"ID":"4840a2c1-42d5-4e96-96a2-ad4736633eaa","Type":"ContainerDied","Data":"ca23ae4c50e5cac8dce8d6c5eb137aa7e59e300a35ab319345d27a6391fed110"} Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.615693 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.651503 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-hjzlr"] Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.660371 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-hjzlr"] Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.781596 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host\") pod \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.781657 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdpp7\" (UniqueName: \"kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7\") pod \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\" (UID: \"4840a2c1-42d5-4e96-96a2-ad4736633eaa\") " Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.782027 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host" (OuterVolumeSpecName: "host") pod "4840a2c1-42d5-4e96-96a2-ad4736633eaa" (UID: "4840a2c1-42d5-4e96-96a2-ad4736633eaa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.782563 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4840a2c1-42d5-4e96-96a2-ad4736633eaa-host\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.788511 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7" (OuterVolumeSpecName: "kube-api-access-cdpp7") pod "4840a2c1-42d5-4e96-96a2-ad4736633eaa" (UID: "4840a2c1-42d5-4e96-96a2-ad4736633eaa"). InnerVolumeSpecName "kube-api-access-cdpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:53:00 crc kubenswrapper[4735]: I0131 15:53:00.883963 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdpp7\" (UniqueName: \"kubernetes.io/projected/4840a2c1-42d5-4e96-96a2-ad4736633eaa-kube-api-access-cdpp7\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.503483 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814248e5f660d4b74e0bc5293ad272d301242632ff288f9806dacdaa5efe578d" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.503525 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-hjzlr" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.556263 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4840a2c1-42d5-4e96-96a2-ad4736633eaa" path="/var/lib/kubelet/pods/4840a2c1-42d5-4e96-96a2-ad4736633eaa/volumes" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.823830 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-tzzxr"] Jan 31 15:53:01 crc kubenswrapper[4735]: E0131 15:53:01.825872 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4840a2c1-42d5-4e96-96a2-ad4736633eaa" containerName="container-00" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.825907 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4840a2c1-42d5-4e96-96a2-ad4736633eaa" containerName="container-00" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.826243 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4840a2c1-42d5-4e96-96a2-ad4736633eaa" containerName="container-00" Jan 31 15:53:01 crc kubenswrapper[4735]: I0131 15:53:01.827282 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.003064 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxmg\" (UniqueName: \"kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.003182 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.105907 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qxmg\" (UniqueName: \"kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.106382 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.106483 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.126414 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qxmg\" (UniqueName: \"kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg\") pod \"crc-debug-tzzxr\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " 
pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.147266 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.512409 4735 generic.go:334] "Generic (PLEG): container finished" podID="f3b76b74-bb9a-4954-8717-e44bdf23e86f" containerID="d84e7fb7336e9c83beed00855a077065ecdfc5f614afc87149a0dd27318c0bea" exitCode=0 Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.512500 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" event={"ID":"f3b76b74-bb9a-4954-8717-e44bdf23e86f","Type":"ContainerDied","Data":"d84e7fb7336e9c83beed00855a077065ecdfc5f614afc87149a0dd27318c0bea"} Jan 31 15:53:02 crc kubenswrapper[4735]: I0131 15:53:02.512670 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" event={"ID":"f3b76b74-bb9a-4954-8717-e44bdf23e86f","Type":"ContainerStarted","Data":"d78bed2e39320b98584dd28af5d800d4b7a6aefb9a22616413fc592dd6312ea3"} Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.068138 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-tzzxr"] Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.075196 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-tzzxr"] Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.646807 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.834382 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qxmg\" (UniqueName: \"kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg\") pod \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.834914 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host\") pod \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\" (UID: \"f3b76b74-bb9a-4954-8717-e44bdf23e86f\") " Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.835114 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host" (OuterVolumeSpecName: "host") pod "f3b76b74-bb9a-4954-8717-e44bdf23e86f" (UID: "f3b76b74-bb9a-4954-8717-e44bdf23e86f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.835677 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3b76b74-bb9a-4954-8717-e44bdf23e86f-host\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.843905 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg" (OuterVolumeSpecName: "kube-api-access-9qxmg") pod "f3b76b74-bb9a-4954-8717-e44bdf23e86f" (UID: "f3b76b74-bb9a-4954-8717-e44bdf23e86f"). InnerVolumeSpecName "kube-api-access-9qxmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:53:03 crc kubenswrapper[4735]: I0131 15:53:03.937861 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qxmg\" (UniqueName: \"kubernetes.io/projected/f3b76b74-bb9a-4954-8717-e44bdf23e86f-kube-api-access-9qxmg\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.280322 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-qcp65"] Jan 31 15:53:04 crc kubenswrapper[4735]: E0131 15:53:04.280691 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b76b74-bb9a-4954-8717-e44bdf23e86f" containerName="container-00" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.280702 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b76b74-bb9a-4954-8717-e44bdf23e86f" containerName="container-00" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.280888 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b76b74-bb9a-4954-8717-e44bdf23e86f" containerName="container-00" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.281470 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.451822 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.452110 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqvr\" (UniqueName: \"kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.540960 4735 scope.go:117] "RemoveContainer" containerID="d84e7fb7336e9c83beed00855a077065ecdfc5f614afc87149a0dd27318c0bea" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.541023 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-tzzxr" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.554100 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.554251 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzqvr\" (UniqueName: \"kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.554823 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.582346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqvr\" (UniqueName: \"kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr\") pod \"crc-debug-qcp65\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: I0131 15:53:04.611243 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:04 crc kubenswrapper[4735]: W0131 15:53:04.651068 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32a52e9_3782_4742_a735_2aeb66c3f8bf.slice/crio-a17cd4896f353eddfcc2d283aaea39f4a131ec825553bff387b1c11d46fc062f WatchSource:0}: Error finding container a17cd4896f353eddfcc2d283aaea39f4a131ec825553bff387b1c11d46fc062f: Status 404 returned error can't find the container with id a17cd4896f353eddfcc2d283aaea39f4a131ec825553bff387b1c11d46fc062f Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.555149 4735 generic.go:334] "Generic (PLEG): container finished" podID="c32a52e9-3782-4742-a735-2aeb66c3f8bf" containerID="ea9c925e42873f41b8a6f603fd09054089c0deaa6c99ea35bce3cb31520c29c3" exitCode=0 Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.567394 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b76b74-bb9a-4954-8717-e44bdf23e86f" path="/var/lib/kubelet/pods/f3b76b74-bb9a-4954-8717-e44bdf23e86f/volumes" Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.568471 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" event={"ID":"c32a52e9-3782-4742-a735-2aeb66c3f8bf","Type":"ContainerDied","Data":"ea9c925e42873f41b8a6f603fd09054089c0deaa6c99ea35bce3cb31520c29c3"} Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.568535 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" event={"ID":"c32a52e9-3782-4742-a735-2aeb66c3f8bf","Type":"ContainerStarted","Data":"a17cd4896f353eddfcc2d283aaea39f4a131ec825553bff387b1c11d46fc062f"} Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.623534 4735 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-9jq2z/crc-debug-qcp65"] Jan 31 15:53:05 crc kubenswrapper[4735]: I0131 15:53:05.634885 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jq2z/crc-debug-qcp65"] Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.673114 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.797831 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host\") pod \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.797963 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzqvr\" (UniqueName: \"kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr\") pod \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\" (UID: \"c32a52e9-3782-4742-a735-2aeb66c3f8bf\") " Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.797990 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host" (OuterVolumeSpecName: "host") pod "c32a52e9-3782-4742-a735-2aeb66c3f8bf" (UID: "c32a52e9-3782-4742-a735-2aeb66c3f8bf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.798370 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c32a52e9-3782-4742-a735-2aeb66c3f8bf-host\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.803030 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr" (OuterVolumeSpecName: "kube-api-access-hzqvr") pod "c32a52e9-3782-4742-a735-2aeb66c3f8bf" (UID: "c32a52e9-3782-4742-a735-2aeb66c3f8bf"). InnerVolumeSpecName "kube-api-access-hzqvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:53:06 crc kubenswrapper[4735]: I0131 15:53:06.899760 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzqvr\" (UniqueName: \"kubernetes.io/projected/c32a52e9-3782-4742-a735-2aeb66c3f8bf-kube-api-access-hzqvr\") on node \"crc\" DevicePath \"\"" Jan 31 15:53:07 crc kubenswrapper[4735]: I0131 15:53:07.550217 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32a52e9-3782-4742-a735-2aeb66c3f8bf" path="/var/lib/kubelet/pods/c32a52e9-3782-4742-a735-2aeb66c3f8bf/volumes" Jan 31 15:53:07 crc kubenswrapper[4735]: I0131 15:53:07.577104 4735 scope.go:117] "RemoveContainer" containerID="ea9c925e42873f41b8a6f603fd09054089c0deaa6c99ea35bce3cb31520c29c3" Jan 31 15:53:07 crc kubenswrapper[4735]: I0131 15:53:07.577214 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9jq2z/crc-debug-qcp65" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.259009 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-844d5857fb-gs56h_89ea2bff-49c7-4b54-a026-c7c632da1b0c/barbican-api/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.448292 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-844d5857fb-gs56h_89ea2bff-49c7-4b54-a026-c7c632da1b0c/barbican-api-log/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.495098 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c7cb96f6b-ctvpf_df8f8f18-32ae-4729-9e50-304d7dfdbf07/barbican-keystone-listener/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.592126 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c7cb96f6b-ctvpf_df8f8f18-32ae-4729-9e50-304d7dfdbf07/barbican-keystone-listener-log/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.692209 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66c7c44665-k447s_c8df5434-b30b-49b1-9130-b152a98f3af0/barbican-worker-log/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.714830 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66c7c44665-k447s_c8df5434-b30b-49b1-9130-b152a98f3af0/barbican-worker/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.904563 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7_8b16424b-3400-4f1a-931f-f0a2a398859c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:21 crc kubenswrapper[4735]: I0131 15:53:21.935373 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/ceilometer-central-agent/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.049896 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/ceilometer-notification-agent/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.082925 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/proxy-httpd/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.095849 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/sg-core/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.228445 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f/cinder-api/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.284015 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f/cinder-api-log/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.405060 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_475cce0c-6c29-41e3-8c56-b5368f1b9e92/cinder-scheduler/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.485002 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hspmq_a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.492239 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_475cce0c-6c29-41e3-8c56-b5368f1b9e92/probe/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.672467 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9_1898e10e-7cb4-453e-84f8-ee45e1b109a3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.775601 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/init/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.928694 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/init/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.957085 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/dnsmasq-dns/0.log" Jan 31 15:53:22 crc kubenswrapper[4735]: I0131 15:53:22.994755 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n87t9_f3889565-cd9c-4a0a-80d5-09bc3f0e0a83/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.145220 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5620f33b-a10a-41ae-a9f2-707f94ebbe59/glance-httpd/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.189727 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5620f33b-a10a-41ae-a9f2-707f94ebbe59/glance-log/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.356098 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fdceee3d-5a28-4c46-bd6e-40048cdd56c9/glance-httpd/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.392060 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fdceee3d-5a28-4c46-bd6e-40048cdd56c9/glance-log/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.516702 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-784979f994-vtd4m_c022909b-46cd-4e9d-851e-483e23358bd8/horizon/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.654499 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf_2cfb222c-2a44-4521-af34-3d352b0cfdea/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.827925 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-784979f994-vtd4m_c022909b-46cd-4e9d-851e-483e23358bd8/horizon-log/0.log" Jan 31 15:53:23 crc kubenswrapper[4735]: I0131 15:53:23.872550 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-964lq_7737dd79-8b1c-448a-a81d-5a06b58e32e1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.093883 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b94ccc6d9-2fktc_3d196991-4f4f-4bb3-a113-b33659619f09/keystone-api/0.log" Jan 31 
15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.099909 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_336b74ed-3ea3-4963-9497-a02b65b80a3e/kube-state-metrics/0.log" Jan 31 15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.264637 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b49fw_dfa49e11-fd8b-4933-b184-a524c747ee02/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.593589 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cd5d5cd9-8xjdv_e70f9259-db47-4290-9778-8bf2849a809a/neutron-api/0.log" Jan 31 15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.639242 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cd5d5cd9-8xjdv_e70f9259-db47-4290-9778-8bf2849a809a/neutron-httpd/0.log" Jan 31 15:53:24 crc kubenswrapper[4735]: I0131 15:53:24.968047 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg_bdcd1840-f7b6-41b8-bad9-43e441f1cad9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.338172 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7fb482d0-dc39-4b62-81e7-c680dc211c0b/nova-api-log/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.374744 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2e5c1a1c-5aa4-428a-8729-77ce2cb81992/nova-cell0-conductor-conductor/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.491056 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7fb482d0-dc39-4b62-81e7-c680dc211c0b/nova-api-api/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.652614 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1a55c776-7b88-4c64-a106-b2f8619425a7/nova-cell1-conductor-conductor/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.681613 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ecaa245-bfd5-42b9-b10f-117b0dfef5cb/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.799717 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-grs6l_c6cf70ec-3fc5-4144-8756-bcf0a8704416/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:25 crc kubenswrapper[4735]: I0131 15:53:25.974369 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_df24097c-68e3-4bbc-b56b-cc19e5e91ea6/nova-metadata-log/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.309860 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/mysql-bootstrap/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.312563 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_622b97c5-deed-459f-a9a1-c407b424f921/nova-scheduler-scheduler/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.529305 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/mysql-bootstrap/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.561357 
4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/galera/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.772837 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/mysql-bootstrap/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.927420 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/mysql-bootstrap/0.log" Jan 31 15:53:26 crc kubenswrapper[4735]: I0131 15:53:26.935069 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/galera/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.000753 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_df24097c-68e3-4bbc-b56b-cc19e5e91ea6/nova-metadata-metadata/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.247048 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75603184-bd90-47b2-a5e2-c06e0c205001/openstackclient/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.248077 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2vhbk_07524504-28f6-44cc-8630-2e736f87ff3d/ovn-controller/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.424996 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fjc7w_ce44083a-52e6-45b6-bd3f-90ae832c54fa/openstack-network-exporter/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.479669 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server-init/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.729157 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovs-vswitchd/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.783167 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server-init/0.log" Jan 31 15:53:27 crc kubenswrapper[4735]: I0131 15:53:27.846940 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.032617 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rq2xl_67db3c53-552c-458d-b333-09ad7b0f0447/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.038104 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bbe4f564-44b1-441d-aed3-b08ad06141c6/openstack-network-exporter/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.096002 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bbe4f564-44b1-441d-aed3-b08ad06141c6/ovn-northd/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.283087 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_667db586-48c3-4b33-8e39-eb27c45d7841/openstack-network-exporter/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.293076 4735 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_667db586-48c3-4b33-8e39-eb27c45d7841/ovsdbserver-nb/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.498526 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7ff11e61-5fe3-474b-ac0d-8a89a364de0e/ovsdbserver-sb/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.517137 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7ff11e61-5fe3-474b-ac0d-8a89a364de0e/openstack-network-exporter/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.679316 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-766f55bc7b-w8qbt_a0e33520-34a2-4009-9f61-7b6211fa8744/placement-api/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.804073 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/setup-container/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.827323 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-766f55bc7b-w8qbt_a0e33520-34a2-4009-9f61-7b6211fa8744/placement-log/0.log" Jan 31 15:53:28 crc kubenswrapper[4735]: I0131 15:53:28.990405 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/setup-container/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.026727 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/rabbitmq/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.038563 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/setup-container/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.238409 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj_0e45ff90-974c-42ba-986c-9303f5cde30f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.244029 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/rabbitmq/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.258763 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/setup-container/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.416193 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9thw4_6468a09d-c7d4-428a-bfa1-50c28830f709/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.574798 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm_2bdb9dbd-b178-43f6-985a-1b19f40820cd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.745123 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8rvd2_c882b7b4-b823-41c7-98cd-862f19262e18/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:29 crc kubenswrapper[4735]: I0131 15:53:29.811824 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zlgxw_ddf071a4-728c-470f-829d-c905a4b60f9d/ssh-known-hosts-edpm-deployment/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.022717 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f4f968b5f-slb77_bdf1b1c9-1210-4c8f-beba-1780efc67349/proxy-server/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.068979 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f4f968b5f-slb77_bdf1b1c9-1210-4c8f-beba-1780efc67349/proxy-httpd/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.310415 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6np6m_83dc2b98-a7e1-4654-95cf-fd37532fa571/swift-ring-rebalance/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.332903 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-auditor/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.479635 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-reaper/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.552880 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-replicator/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.565698 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-server/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.637844 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-auditor/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.721999 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-replicator/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.745471 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-updater/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.788382 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-server/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.935580 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-expirer/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.966947 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-replicator/0.log" Jan 31 15:53:30 crc kubenswrapper[4735]: I0131 15:53:30.989349 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-auditor/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.010641 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-server/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.130015 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/rsync/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.139140 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-updater/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.172840 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/swift-recon-cron/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.402830 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cdsth_ad304d37-c310-4b94-b535-b75f3ee49e81/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.436643 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f/tempest-tests-tempest-tests-runner/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.611103 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9bcab996-865d-4021-bd63-7e17f6093145/test-operator-logs-container/0.log" Jan 31 15:53:31 crc kubenswrapper[4735]: I0131 15:53:31.676881 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-sg76l_a02498a0-04e3-4062-b19f-aa22ab9544a3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 15:53:40 crc kubenswrapper[4735]: I0131 15:53:40.886918 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bb53ef9b-e389-4e78-a677-5def022eab7e/memcached/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.181919 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.352648 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.382562 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.409968 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.545641 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.551919 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/extract/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.552301 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.818823 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-drsgx_a27712fb-eb89-49ff-b5a5-1432a0a4774f/manager/0.log" Jan 31 15:53:57 crc kubenswrapper[4735]: I0131 15:53:57.842416 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-qh6xs_6cc9c424-b3f7-4744-92d8-5844915879bf/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.008232 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-bxf2k_d0a68002-1422-44d3-8656-2901a42b42f4/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.078009 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64d858bbbd-k4bh2_e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.182917 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-r7xlm_c4915a12-75dc-4b2e-a039-c98287c8cec4/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.289911 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-bb6l7_627cef1f-bb76-4dd2-b7d1-b3f55bdeb335/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.501393 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-87bd9d46f-hzrws_ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.629316 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-66z2p_8d42c163-9e7d-485f-b94e-4796166ba8f9/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.725440 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-rfrcn_0aef4ea8-3e5e-497e-b2bd-280d521e895f/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.798760 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-jxthf_f4b1920b-1fb0-4f10-a3fc-97d19aacc34e/manager/0.log" Jan 31 15:53:58 crc kubenswrapper[4735]: I0131 15:53:58.934434 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-7v4mn_5926394d-8ab0-46d7-9bb6-1ea59a0d7511/manager/0.log" Jan 31 15:53:59 crc kubenswrapper[4735]: I0131 15:53:59.033231 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-rtzcv_c814b622-e60d-492c-ae86-9e78b37297e4/manager/0.log" Jan 31 15:53:59 crc kubenswrapper[4735]: I0131 15:53:59.204299 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-f8h8s_f2610081-f50c-441f-8b8a-bc2a236065f1/manager/0.log" Jan 31 15:53:59 crc kubenswrapper[4735]: I0131 15:53:59.416633 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk_437ef1c6-09b5-45c2-b88d-e42e432ae801/manager/0.log" Jan 31 15:53:59 crc kubenswrapper[4735]: I0131 15:53:59.806485 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b494f4958-8qvfp_7043e467-d103-458b-a498-c110f06809f1/operator/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.008758 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pp8kj_5a0ed87d-afcc-44e0-a590-4f56b4338cb7/registry-server/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.277973 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-thcqg_8bc95764-b0cb-4206-af35-fefb00d8c71f/manager/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.349755 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b89ddb58-f8f64_b469fe09-816f-4ffa-a61d-82e448011837/manager/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.698190 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-zgmmx_bc56f00a-31c6-474b-af93-59442f956567/manager/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.812493 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lmv8c_9ba40dc8-290a-4a40-a039-609874c181d4/operator/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.848023 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-55f4d66b54-gcks8_3808aa6d-1386-4e9a-81b2-e37c11246170/manager/0.log" Jan 31 15:54:00 crc kubenswrapper[4735]: I0131 15:54:00.936313 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76864d4fdb-ps2jp_1cd70b29-6ef8-4625-93eb-f7113200b385/manager/0.log" Jan 31 15:54:01 crc kubenswrapper[4735]: I0131 15:54:01.071086 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-8446785844-jtbmg_80924e89-7cef-4879-b955-28d3ef271729/manager/0.log" Jan 31 15:54:01 crc kubenswrapper[4735]: I0131 15:54:01.108945 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-dlpt5_57aa7be3-f130-41b7-a400-1c2ddd1b8ce3/manager/0.log" Jan 31 15:54:01 crc kubenswrapper[4735]: I0131 15:54:01.188939 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-pqmvf_0c4b1bae-6cff-4914-907c-f6c9867a803b/manager/0.log" Jan 31 15:54:21 crc kubenswrapper[4735]: I0131 15:54:21.132704 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nf8zl_52d5f4fc-bb86-426a-b56e-810e4ffc1315/control-plane-machine-set-operator/0.log" Jan 31 15:54:21 crc kubenswrapper[4735]: I0131 15:54:21.319400 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9q4gk_d4340fb1-7455-4140-9c75-2d075ea0306c/kube-rbac-proxy/0.log" Jan 31 15:54:21 crc kubenswrapper[4735]: I0131 15:54:21.355562 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9q4gk_d4340fb1-7455-4140-9c75-2d075ea0306c/machine-api-operator/0.log" Jan 31 15:54:35 crc kubenswrapper[4735]: I0131 15:54:35.889106 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2x2bm_1715eb23-6cf5-4a8f-9d53-11fae6b38859/cert-manager-controller/0.log" Jan 31 15:54:36 crc kubenswrapper[4735]: I0131 15:54:36.002481 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5t4c2_3cca71ec-1c99-4260-9208-9e4202ff3e3e/cert-manager-cainjector/0.log" Jan 31 15:54:36 crc kubenswrapper[4735]: I0131 15:54:36.040452 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25p6r_7032d6de-d341-4686-a7c9-f470bf8237cb/cert-manager-webhook/0.log" Jan 31 15:54:37 crc kubenswrapper[4735]: I0131 15:54:37.346390 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:54:37 crc kubenswrapper[4735]: I0131 15:54:37.346745 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.557680 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-4r6xg_ecdc44de-8e1b-477a-860f-780a279594cc/nmstate-console-plugin/0.log" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.587223 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hs75p_c18ba473-8399-4059-a6c4-22990f6e1cfe/nmstate-handler/0.log" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.744472 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6cbvj_69c76992-f3a3-4e9a-bc71-0eb6a7852b6e/nmstate-metrics/0.log" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.744874 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6cbvj_69c76992-f3a3-4e9a-bc71-0eb6a7852b6e/kube-rbac-proxy/0.log" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.900916 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9rvf2_a11f4f28-7d8c-439b-9e8a-903060113cf4/nmstate-operator/0.log" Jan 31 15:54:50 crc kubenswrapper[4735]: I0131 15:54:50.964546 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lztrk_cd12d118-a925-4765-a3c4-38e34aa3c548/nmstate-webhook/0.log" Jan 31 15:55:07 crc kubenswrapper[4735]: I0131 15:55:07.346129 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:55:07 crc kubenswrapper[4735]: I0131 15:55:07.346883 4735 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.691274 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:10 crc kubenswrapper[4735]: E0131 15:55:10.692512 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32a52e9-3782-4742-a735-2aeb66c3f8bf" containerName="container-00" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.692533 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32a52e9-3782-4742-a735-2aeb66c3f8bf" containerName="container-00" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.692871 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32a52e9-3782-4742-a735-2aeb66c3f8bf" containerName="container-00" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.695149 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.700279 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.722870 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.723001 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8rs\" (UniqueName: \"kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.723033 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.824509 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.824699 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8rs\" (UniqueName: \"kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.824724 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.825018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.825288 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:10 crc kubenswrapper[4735]: I0131 15:55:10.861378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8rs\" (UniqueName: \"kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs\") pod \"redhat-operators-bxgmk\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:11 crc kubenswrapper[4735]: I0131 15:55:11.023270 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:11 crc kubenswrapper[4735]: I0131 15:55:11.595379 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:11 crc kubenswrapper[4735]: I0131 15:55:11.758487 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerStarted","Data":"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e"} Jan 31 15:55:11 crc kubenswrapper[4735]: I0131 15:55:11.758723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerStarted","Data":"036d839717f2919759563b59d978f0f130681d7709b3060f0f433041c02b31f2"} Jan 31 15:55:12 crc kubenswrapper[4735]: I0131 15:55:12.774521 4735 generic.go:334] "Generic (PLEG): container finished" podID="3df52158-b1be-475a-a379-84d5ca450ffe" containerID="8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e" exitCode=0 Jan 31 15:55:12 crc kubenswrapper[4735]: I0131 15:55:12.774787 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerDied","Data":"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e"} Jan 31 15:55:13 crc kubenswrapper[4735]: I0131 15:55:13.788317 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerStarted","Data":"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775"} Jan 31 15:55:14 crc kubenswrapper[4735]: I0131 15:55:14.804255 4735 generic.go:334] "Generic (PLEG): container finished" podID="3df52158-b1be-475a-a379-84d5ca450ffe" containerID="0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775" exitCode=0 Jan 
31 15:55:14 crc kubenswrapper[4735]: I0131 15:55:14.804480 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerDied","Data":"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775"} Jan 31 15:55:15 crc kubenswrapper[4735]: I0131 15:55:15.818082 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerStarted","Data":"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83"} Jan 31 15:55:19 crc kubenswrapper[4735]: I0131 15:55:19.737894 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-j9l75_129f74a8-c107-4f72-9972-d2c81e811b93/controller/0.log" Jan 31 15:55:19 crc kubenswrapper[4735]: I0131 15:55:19.760203 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-j9l75_129f74a8-c107-4f72-9972-d2c81e811b93/kube-rbac-proxy/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.060497 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.242984 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.267452 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.283183 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.340678 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.502846 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.534877 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.573380 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.619489 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.862841 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.913842 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 15:55:20 crc kubenswrapper[4735]: I0131 15:55:20.916310 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.024191 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.024244 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.141286 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/kube-rbac-proxy/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.394201 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/kube-rbac-proxy-frr/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.395957 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/controller/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.480026 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/frr-metrics/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.840616 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/reloader/0.log" Jan 31 15:55:21 crc kubenswrapper[4735]: I0131 15:55:21.895036 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-849x9_441f5a71-b5fa-4f6f-a825-40eb055760a0/frr-k8s-webhook-server/0.log" Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.077572 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bxgmk" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="registry-server" probeResult="failure" output=< Jan 31 15:55:22 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 15:55:22 crc kubenswrapper[4735]: > Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.156481 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-866454bfd-gxbsb_0f2b4446-8543-4182-bf59-d1be74b899c9/manager/0.log" Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.389883 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-54d6fb8967-d6d4c_f84a6826-6439-4751-a46e-84a04759c021/webhook-server/0.log" Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.504168 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v7cgt_38acb809-064d-43d6-8800-40cd1cf7f89a/kube-rbac-proxy/0.log" Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.517569 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/frr/0.log" Jan 31 15:55:22 crc kubenswrapper[4735]: I0131 15:55:22.870692 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v7cgt_38acb809-064d-43d6-8800-40cd1cf7f89a/speaker/0.log" Jan 31 15:55:31 crc kubenswrapper[4735]: I0131 15:55:31.104093 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:31 crc 
kubenswrapper[4735]: I0131 15:55:31.131480 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bxgmk" podStartSLOduration=18.676127952 podStartE2EDuration="21.131463963s" podCreationTimestamp="2026-01-31 15:55:10 +0000 UTC" firstStartedPulling="2026-01-31 15:55:12.778827821 +0000 UTC m=+3398.552156873" lastFinishedPulling="2026-01-31 15:55:15.234163822 +0000 UTC m=+3401.007492884" observedRunningTime="2026-01-31 15:55:15.840119029 +0000 UTC m=+3401.613448101" watchObservedRunningTime="2026-01-31 15:55:31.131463963 +0000 UTC m=+3416.904793005" Jan 31 15:55:31 crc kubenswrapper[4735]: I0131 15:55:31.182183 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:31 crc kubenswrapper[4735]: I0131 15:55:31.350855 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.023733 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bxgmk" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="registry-server" containerID="cri-o://fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83" gracePeriod=2 Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.531298 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.558816 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content\") pod \"3df52158-b1be-475a-a379-84d5ca450ffe\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.558899 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities\") pod \"3df52158-b1be-475a-a379-84d5ca450ffe\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.559070 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c8rs\" (UniqueName: \"kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs\") pod \"3df52158-b1be-475a-a379-84d5ca450ffe\" (UID: \"3df52158-b1be-475a-a379-84d5ca450ffe\") " Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.573747 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities" (OuterVolumeSpecName: "utilities") pod "3df52158-b1be-475a-a379-84d5ca450ffe" (UID: "3df52158-b1be-475a-a379-84d5ca450ffe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.594523 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs" (OuterVolumeSpecName: "kube-api-access-9c8rs") pod "3df52158-b1be-475a-a379-84d5ca450ffe" (UID: "3df52158-b1be-475a-a379-84d5ca450ffe"). InnerVolumeSpecName "kube-api-access-9c8rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.661038 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c8rs\" (UniqueName: \"kubernetes.io/projected/3df52158-b1be-475a-a379-84d5ca450ffe-kube-api-access-9c8rs\") on node \"crc\" DevicePath \"\"" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.661075 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.680484 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df52158-b1be-475a-a379-84d5ca450ffe" (UID: "3df52158-b1be-475a-a379-84d5ca450ffe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:55:33 crc kubenswrapper[4735]: I0131 15:55:33.763170 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df52158-b1be-475a-a379-84d5ca450ffe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.039862 4735 generic.go:334] "Generic (PLEG): container finished" podID="3df52158-b1be-475a-a379-84d5ca450ffe" containerID="fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83" exitCode=0 Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.039924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerDied","Data":"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83"} Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.039963 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxgmk" event={"ID":"3df52158-b1be-475a-a379-84d5ca450ffe","Type":"ContainerDied","Data":"036d839717f2919759563b59d978f0f130681d7709b3060f0f433041c02b31f2"} Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.039965 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxgmk" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.039993 4735 scope.go:117] "RemoveContainer" containerID="fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.084735 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.090597 4735 scope.go:117] "RemoveContainer" containerID="0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.093559 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bxgmk"] Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.141742 4735 scope.go:117] "RemoveContainer" containerID="8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.173844 4735 scope.go:117] "RemoveContainer" containerID="fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83" Jan 31 15:55:34 crc kubenswrapper[4735]: E0131 15:55:34.174462 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83\": container with ID starting with fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83 not found: ID does not exist" containerID="fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.174509 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83"} err="failed to get container status \"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83\": rpc error: code = NotFound desc = could not find container \"fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83\": container with ID starting with fddf4ca53c03d094a7f964441b9041a9c79258e03ea338802cd32f100c322d83 not found: ID does not exist" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.174538 4735 scope.go:117] "RemoveContainer" containerID="0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775" Jan 31 15:55:34 crc kubenswrapper[4735]: E0131 15:55:34.175065 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775\": container with ID starting with 0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775 not found: ID does not exist" containerID="0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.175108 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775"} err="failed to get container status \"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775\": rpc error: code = NotFound desc = could not find container \"0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775\": container with ID starting with 0d89d6014a0b17798aeac475763c6fb9e77d2e55a377043af922b8d2c41cd775 not found: ID does not exist" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.175142 4735 scope.go:117] "RemoveContainer" 
containerID="8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e" Jan 31 15:55:34 crc kubenswrapper[4735]: E0131 15:55:34.175629 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e\": container with ID starting with 8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e not found: ID does not exist" containerID="8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e" Jan 31 15:55:34 crc kubenswrapper[4735]: I0131 15:55:34.175662 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e"} err="failed to get container status \"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e\": rpc error: code = NotFound desc = could not find container \"8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e\": container with ID starting with 8d2ee165ee08766ac5802eb681d5ad2c175e8efb582eee2a1d9c210bc6c6297e not found: ID does not exist" Jan 31 15:55:35 crc kubenswrapper[4735]: I0131 15:55:35.555324 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" path="/var/lib/kubelet/pods/3df52158-b1be-475a-a379-84d5ca450ffe/volumes" Jan 31 15:55:37 crc kubenswrapper[4735]: I0131 15:55:37.345691 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:55:37 crc kubenswrapper[4735]: I0131 15:55:37.346018 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:55:37 crc kubenswrapper[4735]: I0131 15:55:37.346068 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:55:37 crc kubenswrapper[4735]: I0131 15:55:37.346912 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:55:37 crc kubenswrapper[4735]: I0131 15:55:37.347008 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97" gracePeriod=600 Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.000133 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.080996 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="582442e0-b079-476d-849d-a4902306aba0" containerID="5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97" exitCode=0 Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.081158 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97"} Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.081343 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87"} Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.081451 4735 scope.go:117] "RemoveContainer" containerID="5d3fcf6bfeb319e8e4b988185516d3b1534ecc6ea1d291754d95954e86ce1fe6" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.224352 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.236468 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.240694 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.413760 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.427192 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.476271 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/extract/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.580809 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.736738 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.755341 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.805659 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.927245 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.952254 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/extract/0.log" Jan 31 15:55:38 crc kubenswrapper[4735]: I0131 15:55:38.973118 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.108619 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.273610 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.318612 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.338912 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.434149 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.445069 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.647374 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.809372 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.827574 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/registry-server/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.854141 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 15:55:39 crc kubenswrapper[4735]: I0131 15:55:39.912465 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 15:55:40 
crc kubenswrapper[4735]: I0131 15:55:40.113506 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.191214 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.357471 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5l67c_26db5009-aa32-4023-88bf-05ba79d4d907/marketplace-operator/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.516179 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.585711 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/registry-server/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.708958 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.751528 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.778026 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.923230 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 15:55:40 crc kubenswrapper[4735]: I0131 15:55:40.927328 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.061758 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/registry-server/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.141394 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.289971 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.306685 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.317928 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 15:55:41 crc 
kubenswrapper[4735]: I0131 15:55:41.531269 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.543121 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 15:55:41 crc kubenswrapper[4735]: I0131 15:55:41.887536 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/registry-server/0.log" Jan 31 15:56:07 crc kubenswrapper[4735]: E0131 15:56:07.309686 4735 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:35104->38.102.83.241:38007: write tcp 38.102.83.241:35104->38.102.83.241:38007: write: broken pipe Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.127146 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:19 crc kubenswrapper[4735]: E0131 15:57:19.128343 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="registry-server" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.128362 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="registry-server" Jan 31 15:57:19 crc kubenswrapper[4735]: E0131 15:57:19.128386 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="extract-utilities" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.128395 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="extract-utilities" Jan 31 15:57:19 crc kubenswrapper[4735]: E0131 15:57:19.128490 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="extract-content" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.128501 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="extract-content" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.128773 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df52158-b1be-475a-a379-84d5ca450ffe" containerName="registry-server" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.130611 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.141622 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.259679 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.259746 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvl5w\" (UniqueName: \"kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.260009 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.361611 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.361673 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvl5w\" (UniqueName: \"kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.361751 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.362207 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.362254 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.394182 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cvl5w\" (UniqueName: \"kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w\") pod \"certified-operators-wx56d\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:19 crc kubenswrapper[4735]: I0131 15:57:19.472359 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:20 crc kubenswrapper[4735]: I0131 15:57:19.998591 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:20 crc kubenswrapper[4735]: I0131 15:57:20.238707 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerStarted","Data":"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0"} Jan 31 15:57:20 crc kubenswrapper[4735]: I0131 15:57:20.241525 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerStarted","Data":"e040b24a13005f5099acbcf65dbd8019f6b3c32b57360379348b846ff663b478"} Jan 31 15:57:21 crc kubenswrapper[4735]: I0131 15:57:21.248904 4735 generic.go:334] "Generic (PLEG): container finished" podID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerID="ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0" exitCode=0 Jan 31 15:57:21 crc kubenswrapper[4735]: I0131 15:57:21.249039 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerDied","Data":"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0"} Jan 31 15:57:21 crc kubenswrapper[4735]: I0131 15:57:21.252143 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:57:22 crc kubenswrapper[4735]: I0131 15:57:22.265810 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerStarted","Data":"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34"} Jan 31 15:57:23 crc kubenswrapper[4735]: I0131 15:57:23.278417 4735 generic.go:334] "Generic (PLEG): container finished" podID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerID="3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34" exitCode=0 Jan 31 15:57:23 crc kubenswrapper[4735]: I0131 15:57:23.278680 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerDied","Data":"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34"} Jan 31 15:57:24 crc kubenswrapper[4735]: I0131 15:57:24.297695 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerStarted","Data":"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12"} Jan 31 15:57:24 crc kubenswrapper[4735]: I0131 15:57:24.306626 4735 generic.go:334] "Generic (PLEG): container finished" podID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerID="3a7fedc66b09c5c250abf7e0a810c22087d528eddc1c42525dd7dbe6e0e1c981" exitCode=0 Jan 31 
15:57:24 crc kubenswrapper[4735]: I0131 15:57:24.306682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9jq2z/must-gather-bxn59" event={"ID":"fc3bde80-85c4-436f-8741-d8c1248bdec8","Type":"ContainerDied","Data":"3a7fedc66b09c5c250abf7e0a810c22087d528eddc1c42525dd7dbe6e0e1c981"} Jan 31 15:57:24 crc kubenswrapper[4735]: I0131 15:57:24.307415 4735 scope.go:117] "RemoveContainer" containerID="3a7fedc66b09c5c250abf7e0a810c22087d528eddc1c42525dd7dbe6e0e1c981" Jan 31 15:57:24 crc kubenswrapper[4735]: I0131 15:57:24.341249 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wx56d" podStartSLOduration=2.847777486 podStartE2EDuration="5.34118982s" podCreationTimestamp="2026-01-31 15:57:19 +0000 UTC" firstStartedPulling="2026-01-31 15:57:21.251842192 +0000 UTC m=+3527.025171244" lastFinishedPulling="2026-01-31 15:57:23.745254536 +0000 UTC m=+3529.518583578" observedRunningTime="2026-01-31 15:57:24.329184792 +0000 UTC m=+3530.102513874" watchObservedRunningTime="2026-01-31 15:57:24.34118982 +0000 UTC m=+3530.114518902" Jan 31 15:57:25 crc kubenswrapper[4735]: I0131 15:57:25.374371 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jq2z_must-gather-bxn59_fc3bde80-85c4-436f-8741-d8c1248bdec8/gather/0.log" Jan 31 15:57:29 crc kubenswrapper[4735]: I0131 15:57:29.473031 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:29 crc kubenswrapper[4735]: I0131 15:57:29.473799 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:29 crc kubenswrapper[4735]: I0131 15:57:29.573180 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:30 crc kubenswrapper[4735]: I0131 15:57:30.451602 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:30 crc kubenswrapper[4735]: I0131 15:57:30.529160 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:32 crc kubenswrapper[4735]: I0131 15:57:32.401841 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wx56d" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="registry-server" containerID="cri-o://c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12" gracePeriod=2 Jan 31 15:57:32 crc kubenswrapper[4735]: I0131 15:57:32.985829 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.132023 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvl5w\" (UniqueName: \"kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w\") pod \"427b57fc-e5dd-412c-9e1f-a57c43c15213\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.132519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content\") pod \"427b57fc-e5dd-412c-9e1f-a57c43c15213\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.132981 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities\") pod \"427b57fc-e5dd-412c-9e1f-a57c43c15213\" (UID: \"427b57fc-e5dd-412c-9e1f-a57c43c15213\") " Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.135434 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities" (OuterVolumeSpecName: "utilities") pod "427b57fc-e5dd-412c-9e1f-a57c43c15213" (UID: "427b57fc-e5dd-412c-9e1f-a57c43c15213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.135698 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.137667 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w" (OuterVolumeSpecName: "kube-api-access-cvl5w") pod "427b57fc-e5dd-412c-9e1f-a57c43c15213" (UID: "427b57fc-e5dd-412c-9e1f-a57c43c15213"). InnerVolumeSpecName "kube-api-access-cvl5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.178607 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "427b57fc-e5dd-412c-9e1f-a57c43c15213" (UID: "427b57fc-e5dd-412c-9e1f-a57c43c15213"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.237740 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvl5w\" (UniqueName: \"kubernetes.io/projected/427b57fc-e5dd-412c-9e1f-a57c43c15213-kube-api-access-cvl5w\") on node \"crc\" DevicePath \"\"" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.237780 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427b57fc-e5dd-412c-9e1f-a57c43c15213-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.410756 4735 generic.go:334] "Generic (PLEG): container finished" podID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerID="c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12" exitCode=0 Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.410817 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerDied","Data":"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12"} Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.410832 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wx56d" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.410861 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx56d" event={"ID":"427b57fc-e5dd-412c-9e1f-a57c43c15213","Type":"ContainerDied","Data":"e040b24a13005f5099acbcf65dbd8019f6b3c32b57360379348b846ff663b478"} Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.410887 4735 scope.go:117] "RemoveContainer" containerID="c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.441390 4735 scope.go:117] "RemoveContainer" containerID="3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.442802 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.451276 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wx56d"] Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.462257 4735 scope.go:117] "RemoveContainer" containerID="ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.513847 4735 scope.go:117] "RemoveContainer" containerID="c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12" Jan 31 15:57:33 crc kubenswrapper[4735]: E0131 15:57:33.514229 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12\": container with ID starting with c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12 not found: ID does not exist" containerID="c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.514284 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12"} err="failed to get container status 
\"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12\": rpc error: code = NotFound desc = could not find container \"c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12\": container with ID starting with c6c5631cbaf9180c15d554ce84d9d20abd28f49516ff522f9012a5540d25ce12 not found: ID does not exist" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.514311 4735 scope.go:117] "RemoveContainer" containerID="3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34" Jan 31 15:57:33 crc kubenswrapper[4735]: E0131 15:57:33.514537 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34\": container with ID starting with 3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34 not found: ID does not exist" containerID="3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.514572 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34"} err="failed to get container status \"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34\": rpc error: code = NotFound desc = could not find container \"3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34\": container with ID starting with 3b289962f9f3f59246f94ba7d68e9fc84cfc6164476b39832439a3d11e132a34 not found: ID does not exist" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.514592 4735 scope.go:117] "RemoveContainer" containerID="ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0" Jan 31 15:57:33 crc kubenswrapper[4735]: E0131 15:57:33.514934 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0\": container with ID starting with ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0 not found: ID does not exist" containerID="ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.514966 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0"} err="failed to get container status \"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0\": rpc error: code = NotFound desc = could not find container \"ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0\": container with ID starting with ef124c7d3b70b993eec4249cb3c1015deac594fd03aca76bfbdfb1d9afa8fda0 not found: ID does not exist" Jan 31 15:57:33 crc kubenswrapper[4735]: I0131 15:57:33.553359 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" path="/var/lib/kubelet/pods/427b57fc-e5dd-412c-9e1f-a57c43c15213/volumes" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.029265 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9jq2z/must-gather-bxn59"] Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.029680 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9jq2z/must-gather-bxn59" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="copy" 
containerID="cri-o://d981c648dbd7beb2f616bc7f9496ab3b2c43304c0fe29bdbf2b0ee9cea8b8d42" gracePeriod=2 Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.041671 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9jq2z/must-gather-bxn59"] Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.421605 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jq2z_must-gather-bxn59_fc3bde80-85c4-436f-8741-d8c1248bdec8/copy/0.log" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.422241 4735 generic.go:334] "Generic (PLEG): container finished" podID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerID="d981c648dbd7beb2f616bc7f9496ab3b2c43304c0fe29bdbf2b0ee9cea8b8d42" exitCode=143 Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.490564 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jq2z_must-gather-bxn59_fc3bde80-85c4-436f-8741-d8c1248bdec8/copy/0.log" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.491001 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.663100 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output\") pod \"fc3bde80-85c4-436f-8741-d8c1248bdec8\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.663190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8v6l\" (UniqueName: \"kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l\") pod \"fc3bde80-85c4-436f-8741-d8c1248bdec8\" (UID: \"fc3bde80-85c4-436f-8741-d8c1248bdec8\") " Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.676678 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l" (OuterVolumeSpecName: "kube-api-access-f8v6l") pod "fc3bde80-85c4-436f-8741-d8c1248bdec8" (UID: "fc3bde80-85c4-436f-8741-d8c1248bdec8"). InnerVolumeSpecName "kube-api-access-f8v6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.766149 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8v6l\" (UniqueName: \"kubernetes.io/projected/fc3bde80-85c4-436f-8741-d8c1248bdec8-kube-api-access-f8v6l\") on node \"crc\" DevicePath \"\"" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.802196 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fc3bde80-85c4-436f-8741-d8c1248bdec8" (UID: "fc3bde80-85c4-436f-8741-d8c1248bdec8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:57:34 crc kubenswrapper[4735]: I0131 15:57:34.867333 4735 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc3bde80-85c4-436f-8741-d8c1248bdec8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 15:57:35 crc kubenswrapper[4735]: I0131 15:57:35.436123 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9jq2z_must-gather-bxn59_fc3bde80-85c4-436f-8741-d8c1248bdec8/copy/0.log" Jan 31 15:57:35 crc kubenswrapper[4735]: I0131 15:57:35.436851 4735 scope.go:117] "RemoveContainer" containerID="d981c648dbd7beb2f616bc7f9496ab3b2c43304c0fe29bdbf2b0ee9cea8b8d42" Jan 31 15:57:35 crc kubenswrapper[4735]: I0131 15:57:35.436907 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9jq2z/must-gather-bxn59" Jan 31 15:57:35 crc kubenswrapper[4735]: I0131 15:57:35.468844 4735 scope.go:117] "RemoveContainer" containerID="3a7fedc66b09c5c250abf7e0a810c22087d528eddc1c42525dd7dbe6e0e1c981" Jan 31 15:57:35 crc kubenswrapper[4735]: I0131 15:57:35.551620 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" path="/var/lib/kubelet/pods/fc3bde80-85c4-436f-8741-d8c1248bdec8/volumes" Jan 31 15:57:37 crc kubenswrapper[4735]: I0131 15:57:37.346055 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:57:37 crc kubenswrapper[4735]: I0131 15:57:37.346496 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:58:07 crc kubenswrapper[4735]: I0131 15:58:07.345657 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:58:07 crc kubenswrapper[4735]: I0131 15:58:07.346386 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:58:37 crc kubenswrapper[4735]: I0131 15:58:37.345909 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:58:37 crc kubenswrapper[4735]: I0131 15:58:37.346728 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 31 15:58:37 crc kubenswrapper[4735]: I0131 15:58:37.346836 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 15:58:37 crc kubenswrapper[4735]: I0131 15:58:37.348033 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:58:37 crc kubenswrapper[4735]: I0131 15:58:37.348163 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" gracePeriod=600 Jan 31 15:58:37 crc kubenswrapper[4735]: E0131 15:58:37.479599 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:58:38 crc kubenswrapper[4735]: I0131 15:58:38.153665 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" exitCode=0 Jan 31 15:58:38 crc kubenswrapper[4735]: I0131 15:58:38.153727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87"} Jan 31 15:58:38 crc kubenswrapper[4735]: I0131 15:58:38.153774 4735 scope.go:117] "RemoveContainer" containerID="5a758e81586bac0f3916459f1a03721aa5cc1e579d53b9084900982500be3e97" Jan 31 15:58:38 crc kubenswrapper[4735]: I0131 15:58:38.154716 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:58:38 crc kubenswrapper[4735]: E0131 15:58:38.156524 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:58:42 crc kubenswrapper[4735]: I0131 15:58:42.021361 4735 scope.go:117] "RemoveContainer" containerID="ca23ae4c50e5cac8dce8d6c5eb137aa7e59e300a35ab319345d27a6391fed110" Jan 31 15:58:51 crc kubenswrapper[4735]: I0131 15:58:51.540986 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:58:51 crc kubenswrapper[4735]: E0131 15:58:51.542570 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:59:03 crc kubenswrapper[4735]: I0131 15:59:03.541780 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:59:03 crc kubenswrapper[4735]: E0131 15:59:03.543334 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:59:16 crc kubenswrapper[4735]: I0131 15:59:16.540200 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:59:16 crc kubenswrapper[4735]: E0131 15:59:16.541281 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:59:28 crc kubenswrapper[4735]: I0131 15:59:28.540710 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:59:28 crc kubenswrapper[4735]: E0131 15:59:28.542096 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:59:40 crc kubenswrapper[4735]: I0131 15:59:40.541069 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:59:40 crc kubenswrapper[4735]: E0131 15:59:40.542271 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 15:59:52 crc kubenswrapper[4735]: I0131 15:59:52.540349 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 15:59:52 crc kubenswrapper[4735]: E0131 15:59:52.541632 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.193712 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s"] Jan 31 16:00:00 crc kubenswrapper[4735]: E0131 16:00:00.194725 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="gather" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.194742 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="gather" Jan 31 16:00:00 crc kubenswrapper[4735]: E0131 16:00:00.194766 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="registry-server" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.194775 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="registry-server" Jan 31 16:00:00 crc kubenswrapper[4735]: E0131 16:00:00.194795 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="extract-utilities" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.194803 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="extract-utilities" Jan 31 16:00:00 crc kubenswrapper[4735]: E0131 16:00:00.194821 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="copy" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.194828 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="copy" Jan 31 16:00:00 crc kubenswrapper[4735]: E0131 16:00:00.194842 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="extract-content" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.194850 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="extract-content" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.195057 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="copy" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.195076 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3bde80-85c4-436f-8741-d8c1248bdec8" containerName="gather" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.195095 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="427b57fc-e5dd-412c-9e1f-a57c43c15213" containerName="registry-server" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.195851 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.200383 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.200583 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.200773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdcx\" (UniqueName: \"kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.200865 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.201340 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.225621 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s"] Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.302911 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdcx\" (UniqueName: \"kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.302957 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.303077 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.304607 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume\") pod 
\"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.315743 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.325704 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdcx\" (UniqueName: \"kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx\") pod \"collect-profiles-29497920-zzj2s\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:00 crc kubenswrapper[4735]: I0131 16:00:00.531304 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:01 crc kubenswrapper[4735]: I0131 16:00:01.042579 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s"] Jan 31 16:00:01 crc kubenswrapper[4735]: I0131 16:00:01.075366 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" event={"ID":"7fadd9c7-4b6d-4752-b338-3be9a90e39d8","Type":"ContainerStarted","Data":"65a8bebd735629205bc235f3c587ef071de54927cf57295980d1899f84c0d2a5"} Jan 31 16:00:02 crc kubenswrapper[4735]: I0131 16:00:02.091466 4735 generic.go:334] "Generic (PLEG): container finished" podID="7fadd9c7-4b6d-4752-b338-3be9a90e39d8" containerID="0ffae5d7ec7f55e7bd3eb1fea0320d923878cf1db3e67e308ffa6b8531386c42" exitCode=0 Jan 31 16:00:02 crc kubenswrapper[4735]: I0131 16:00:02.091530 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" event={"ID":"7fadd9c7-4b6d-4752-b338-3be9a90e39d8","Type":"ContainerDied","Data":"0ffae5d7ec7f55e7bd3eb1fea0320d923878cf1db3e67e308ffa6b8531386c42"} Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.437174 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.470408 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsdcx\" (UniqueName: \"kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx\") pod \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.470526 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume\") pod \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.470669 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume\") pod \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\" (UID: \"7fadd9c7-4b6d-4752-b338-3be9a90e39d8\") " Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.472105 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "7fadd9c7-4b6d-4752-b338-3be9a90e39d8" (UID: "7fadd9c7-4b6d-4752-b338-3be9a90e39d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.478645 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx" (OuterVolumeSpecName: "kube-api-access-vsdcx") pod "7fadd9c7-4b6d-4752-b338-3be9a90e39d8" (UID: "7fadd9c7-4b6d-4752-b338-3be9a90e39d8"). InnerVolumeSpecName "kube-api-access-vsdcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.483860 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7fadd9c7-4b6d-4752-b338-3be9a90e39d8" (UID: "7fadd9c7-4b6d-4752-b338-3be9a90e39d8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.572699 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.572735 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsdcx\" (UniqueName: \"kubernetes.io/projected/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-kube-api-access-vsdcx\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:03 crc kubenswrapper[4735]: I0131 16:00:03.572747 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fadd9c7-4b6d-4752-b338-3be9a90e39d8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.125393 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" event={"ID":"7fadd9c7-4b6d-4752-b338-3be9a90e39d8","Type":"ContainerDied","Data":"65a8bebd735629205bc235f3c587ef071de54927cf57295980d1899f84c0d2a5"} Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.125706 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a8bebd735629205bc235f3c587ef071de54927cf57295980d1899f84c0d2a5" Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.125739 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497920-zzj2s" Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.547026 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:00:04 crc kubenswrapper[4735]: E0131 16:00:04.547786 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.555741 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv"] Jan 31 16:00:04 crc kubenswrapper[4735]: I0131 16:00:04.562110 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-ffmvv"] Jan 31 16:00:05 crc kubenswrapper[4735]: I0131 16:00:05.552529 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818e7c00-8672-44f1-8d47-4a2c2c7d6a3c" path="/var/lib/kubelet/pods/818e7c00-8672-44f1-8d47-4a2c2c7d6a3c/volumes" Jan 31 16:00:05 crc kubenswrapper[4735]: I0131 16:00:05.913245 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:05 crc kubenswrapper[4735]: E0131 16:00:05.913628 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fadd9c7-4b6d-4752-b338-3be9a90e39d8" containerName="collect-profiles" Jan 31 16:00:05 crc kubenswrapper[4735]: I0131 16:00:05.913645 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fadd9c7-4b6d-4752-b338-3be9a90e39d8" containerName="collect-profiles" Jan 31 16:00:05 crc 
kubenswrapper[4735]: I0131 16:00:05.913834 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fadd9c7-4b6d-4752-b338-3be9a90e39d8" containerName="collect-profiles" Jan 31 16:00:05 crc kubenswrapper[4735]: I0131 16:00:05.915105 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:05 crc kubenswrapper[4735]: I0131 16:00:05.938279 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.017935 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndn9v\" (UniqueName: \"kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.018004 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.018190 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.119999 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndn9v\" (UniqueName: \"kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.120098 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.120159 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.120890 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.120974 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.139955 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndn9v\" (UniqueName: \"kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v\") pod \"community-operators-v6dqk\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.259002 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:06 crc kubenswrapper[4735]: W0131 16:00:06.925688 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555d3ff2_0293_4c12_9427_af6df221bfc0.slice/crio-dc18ba00eac81b9aeb19f66417fedab0aa4c4a32db0d38cea82e2d3e2c28e267 WatchSource:0}: Error finding container dc18ba00eac81b9aeb19f66417fedab0aa4c4a32db0d38cea82e2d3e2c28e267: Status 404 returned error can't find the container with id dc18ba00eac81b9aeb19f66417fedab0aa4c4a32db0d38cea82e2d3e2c28e267 Jan 31 16:00:06 crc kubenswrapper[4735]: I0131 16:00:06.925800 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:07 crc kubenswrapper[4735]: I0131 16:00:07.168656 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerStarted","Data":"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720"} Jan 31 16:00:07 crc kubenswrapper[4735]: I0131 16:00:07.169006 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerStarted","Data":"dc18ba00eac81b9aeb19f66417fedab0aa4c4a32db0d38cea82e2d3e2c28e267"} Jan 31 16:00:08 crc kubenswrapper[4735]: I0131 16:00:08.179239 4735 generic.go:334] "Generic (PLEG): container finished" podID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerID="2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720" exitCode=0 Jan 31 16:00:08 crc kubenswrapper[4735]: I0131 16:00:08.179291 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerDied","Data":"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720"} Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.151632 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-86dxp/must-gather-grnzd"] Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.155697 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.158162 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-86dxp"/"default-dockercfg-5hbxh" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.158544 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-86dxp"/"kube-root-ca.crt" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.161810 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-86dxp"/"openshift-service-ca.crt" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.165183 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-86dxp/must-gather-grnzd"] Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.194947 4735 generic.go:334] "Generic (PLEG): container finished" podID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerID="522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd" exitCode=0 Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.194991 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerDied","Data":"522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd"} Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.287582 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8w5\" (UniqueName: \"kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.287634 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.389527 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq8w5\" (UniqueName: \"kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.389807 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.390311 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.412012 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq8w5\" 
(UniqueName: \"kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5\") pod \"must-gather-grnzd\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:09 crc kubenswrapper[4735]: I0131 16:00:09.473642 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:00:10 crc kubenswrapper[4735]: I0131 16:00:10.045258 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-86dxp/must-gather-grnzd"] Jan 31 16:00:10 crc kubenswrapper[4735]: I0131 16:00:10.205351 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerStarted","Data":"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc"} Jan 31 16:00:10 crc kubenswrapper[4735]: I0131 16:00:10.207968 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/must-gather-grnzd" event={"ID":"988570cf-cf0a-4b92-abe2-9661ac089142","Type":"ContainerStarted","Data":"85f9a72758eba1386246216af49f1d4e9b359df10f32e402536cf9cdf61441c9"} Jan 31 16:00:10 crc kubenswrapper[4735]: I0131 16:00:10.229300 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v6dqk" podStartSLOduration=2.761789178 podStartE2EDuration="5.229276147s" podCreationTimestamp="2026-01-31 16:00:05 +0000 UTC" firstStartedPulling="2026-01-31 16:00:07.17095727 +0000 UTC m=+3692.944286322" lastFinishedPulling="2026-01-31 16:00:09.638444239 +0000 UTC m=+3695.411773291" observedRunningTime="2026-01-31 16:00:10.219492701 +0000 UTC m=+3695.992821743" watchObservedRunningTime="2026-01-31 16:00:10.229276147 +0000 UTC m=+3696.002605189" Jan 31 16:00:11 crc kubenswrapper[4735]: I0131 16:00:11.221369 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/must-gather-grnzd" event={"ID":"988570cf-cf0a-4b92-abe2-9661ac089142","Type":"ContainerStarted","Data":"3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d"} Jan 31 16:00:11 crc kubenswrapper[4735]: I0131 16:00:11.221713 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/must-gather-grnzd" event={"ID":"988570cf-cf0a-4b92-abe2-9661ac089142","Type":"ContainerStarted","Data":"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9"} Jan 31 16:00:11 crc kubenswrapper[4735]: I0131 16:00:11.245902 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-86dxp/must-gather-grnzd" podStartSLOduration=2.245881725 podStartE2EDuration="2.245881725s" podCreationTimestamp="2026-01-31 16:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:00:11.240715839 +0000 UTC m=+3697.014044901" watchObservedRunningTime="2026-01-31 16:00:11.245881725 +0000 UTC m=+3697.019210777" Jan 31 16:00:13 crc kubenswrapper[4735]: I0131 16:00:13.826216 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-86dxp/crc-debug-gx7wc"] Jan 31 16:00:13 crc kubenswrapper[4735]: I0131 16:00:13.828175 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:13 crc kubenswrapper[4735]: I0131 16:00:13.981244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdcb\" (UniqueName: \"kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:13 crc kubenswrapper[4735]: I0131 16:00:13.981532 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.083517 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.083675 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdcb\" (UniqueName: \"kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.083694 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.122685 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdcb\" (UniqueName: \"kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb\") pod \"crc-debug-gx7wc\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.144408 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:14 crc kubenswrapper[4735]: I0131 16:00:14.248168 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" event={"ID":"c0733d36-3324-4f74-aad2-539616d2e9a9","Type":"ContainerStarted","Data":"9e017d2fb3327f943120da850a184279f795070b3363e304081d34cd756eaf04"} Jan 31 16:00:15 crc kubenswrapper[4735]: I0131 16:00:15.257323 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" event={"ID":"c0733d36-3324-4f74-aad2-539616d2e9a9","Type":"ContainerStarted","Data":"35a1e4fae8649094114a81acb1362861a63e18986489e8e324663e6273eb7c29"} Jan 31 16:00:15 crc kubenswrapper[4735]: I0131 16:00:15.273333 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" podStartSLOduration=2.27331417 podStartE2EDuration="2.27331417s" podCreationTimestamp="2026-01-31 16:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:00:15.270218443 +0000 UTC m=+3701.043547495" watchObservedRunningTime="2026-01-31 16:00:15.27331417 +0000 UTC m=+3701.046643222" Jan 31 16:00:16 crc kubenswrapper[4735]: I0131 16:00:16.260039 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:16 crc kubenswrapper[4735]: I0131 16:00:16.260469 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:16 crc kubenswrapper[4735]: I0131 16:00:16.320018 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:16 crc kubenswrapper[4735]: I0131 16:00:16.539907 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:00:16 crc kubenswrapper[4735]: E0131 16:00:16.540493 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:17 crc kubenswrapper[4735]: I0131 16:00:17.329852 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:17 crc kubenswrapper[4735]: I0131 16:00:17.384517 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.289676 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v6dqk" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="registry-server" containerID="cri-o://56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc" gracePeriod=2 Jan 31 16:00:19 crc kubenswrapper[4735]: E0131 16:00:19.486845 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555d3ff2_0293_4c12_9427_af6df221bfc0.slice/crio-56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555d3ff2_0293_4c12_9427_af6df221bfc0.slice/crio-conmon-56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc.scope\": RecentStats: unable to find data in memory cache]" Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.776294 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.903753 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndn9v\" (UniqueName: \"kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v\") pod \"555d3ff2-0293-4c12-9427-af6df221bfc0\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.903884 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content\") pod \"555d3ff2-0293-4c12-9427-af6df221bfc0\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.904018 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities\") pod \"555d3ff2-0293-4c12-9427-af6df221bfc0\" (UID: \"555d3ff2-0293-4c12-9427-af6df221bfc0\") " Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.905120 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities" (OuterVolumeSpecName: "utilities") pod "555d3ff2-0293-4c12-9427-af6df221bfc0" (UID: "555d3ff2-0293-4c12-9427-af6df221bfc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.925735 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v" (OuterVolumeSpecName: "kube-api-access-ndn9v") pod "555d3ff2-0293-4c12-9427-af6df221bfc0" (UID: "555d3ff2-0293-4c12-9427-af6df221bfc0"). InnerVolumeSpecName "kube-api-access-ndn9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:00:19 crc kubenswrapper[4735]: I0131 16:00:19.956331 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "555d3ff2-0293-4c12-9427-af6df221bfc0" (UID: "555d3ff2-0293-4c12-9427-af6df221bfc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.005987 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.006274 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/555d3ff2-0293-4c12-9427-af6df221bfc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.006286 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndn9v\" (UniqueName: \"kubernetes.io/projected/555d3ff2-0293-4c12-9427-af6df221bfc0-kube-api-access-ndn9v\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.451442 4735 generic.go:334] "Generic (PLEG): container finished" podID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerID="56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc" exitCode=0 Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.451481 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerDied","Data":"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc"} Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.451505 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6dqk" event={"ID":"555d3ff2-0293-4c12-9427-af6df221bfc0","Type":"ContainerDied","Data":"dc18ba00eac81b9aeb19f66417fedab0aa4c4a32db0d38cea82e2d3e2c28e267"} Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.451510 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6dqk" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.451522 4735 scope.go:117] "RemoveContainer" containerID="56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.486300 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.499349 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v6dqk"] Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.514113 4735 scope.go:117] "RemoveContainer" containerID="522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.544288 4735 scope.go:117] "RemoveContainer" containerID="2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.580884 4735 scope.go:117] "RemoveContainer" containerID="56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc" Jan 31 16:00:20 crc kubenswrapper[4735]: E0131 16:00:20.581242 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc\": container with ID starting with 56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc not found: ID does not exist" containerID="56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.581272 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc"} err="failed to get container status \"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc\": rpc error: code = NotFound desc = could not find container \"56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc\": container with ID starting with 56f95e6660205bd8c50715adc4c0990226c2dadfc7fba28f61cf1023fa6ed1cc not found: ID does not exist" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.581294 4735 scope.go:117] "RemoveContainer" containerID="522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd" Jan 31 16:00:20 crc kubenswrapper[4735]: E0131 16:00:20.581603 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd\": container with ID starting with 522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd not found: ID does not exist" containerID="522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.581635 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd"} err="failed to get container status \"522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd\": rpc error: code = NotFound desc = could not find container \"522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd\": container with ID starting with 522302eb09f09902d516f0fa34b4af966e5bc45acf9c5582a511fd6d63b699cd not found: ID does not exist" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.581655 4735 scope.go:117] "RemoveContainer" 
containerID="2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720" Jan 31 16:00:20 crc kubenswrapper[4735]: E0131 16:00:20.582021 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720\": container with ID starting with 2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720 not found: ID does not exist" containerID="2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720" Jan 31 16:00:20 crc kubenswrapper[4735]: I0131 16:00:20.582037 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720"} err="failed to get container status \"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720\": rpc error: code = NotFound desc = could not find container \"2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720\": container with ID starting with 2a44e514784bf4197b3155ba190384b304b6c637103d4add0d162fcffbf23720 not found: ID does not exist" Jan 31 16:00:21 crc kubenswrapper[4735]: I0131 16:00:21.551489 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" path="/var/lib/kubelet/pods/555d3ff2-0293-4c12-9427-af6df221bfc0/volumes" Jan 31 16:00:27 crc kubenswrapper[4735]: I0131 16:00:27.542251 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:00:27 crc kubenswrapper[4735]: E0131 16:00:27.542988 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:39 crc kubenswrapper[4735]: I0131 16:00:39.540291 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:00:39 crc kubenswrapper[4735]: E0131 16:00:39.541520 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:42 crc kubenswrapper[4735]: I0131 16:00:42.115993 4735 scope.go:117] "RemoveContainer" containerID="75eb9e7e00948789af2a28ea962431944a770f2668aa64da3a4b0fd0bd1ec88c" Jan 31 16:00:45 crc kubenswrapper[4735]: I0131 16:00:45.672447 4735 generic.go:334] "Generic (PLEG): container finished" podID="c0733d36-3324-4f74-aad2-539616d2e9a9" containerID="35a1e4fae8649094114a81acb1362861a63e18986489e8e324663e6273eb7c29" exitCode=0 Jan 31 16:00:45 crc kubenswrapper[4735]: I0131 16:00:45.672538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" event={"ID":"c0733d36-3324-4f74-aad2-539616d2e9a9","Type":"ContainerDied","Data":"35a1e4fae8649094114a81acb1362861a63e18986489e8e324663e6273eb7c29"} Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.810307 4735 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.845670 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-gx7wc"] Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.855442 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-gx7wc"] Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.946556 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host\") pod \"c0733d36-3324-4f74-aad2-539616d2e9a9\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.946687 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host" (OuterVolumeSpecName: "host") pod "c0733d36-3324-4f74-aad2-539616d2e9a9" (UID: "c0733d36-3324-4f74-aad2-539616d2e9a9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.946770 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfdcb\" (UniqueName: \"kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb\") pod \"c0733d36-3324-4f74-aad2-539616d2e9a9\" (UID: \"c0733d36-3324-4f74-aad2-539616d2e9a9\") " Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.947193 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0733d36-3324-4f74-aad2-539616d2e9a9-host\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:46 crc kubenswrapper[4735]: I0131 16:00:46.952290 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb" (OuterVolumeSpecName: "kube-api-access-qfdcb") pod "c0733d36-3324-4f74-aad2-539616d2e9a9" (UID: "c0733d36-3324-4f74-aad2-539616d2e9a9"). InnerVolumeSpecName "kube-api-access-qfdcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:00:47 crc kubenswrapper[4735]: I0131 16:00:47.049268 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfdcb\" (UniqueName: \"kubernetes.io/projected/c0733d36-3324-4f74-aad2-539616d2e9a9-kube-api-access-qfdcb\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:47 crc kubenswrapper[4735]: I0131 16:00:47.551447 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0733d36-3324-4f74-aad2-539616d2e9a9" path="/var/lib/kubelet/pods/c0733d36-3324-4f74-aad2-539616d2e9a9/volumes" Jan 31 16:00:47 crc kubenswrapper[4735]: I0131 16:00:47.693596 4735 scope.go:117] "RemoveContainer" containerID="35a1e4fae8649094114a81acb1362861a63e18986489e8e324663e6273eb7c29" Jan 31 16:00:47 crc kubenswrapper[4735]: I0131 16:00:47.693650 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-gx7wc" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.115639 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mpxff"] Jan 31 16:00:48 crc kubenswrapper[4735]: E0131 16:00:48.116053 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="registry-server" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116067 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="registry-server" Jan 31 16:00:48 crc kubenswrapper[4735]: E0131 16:00:48.116092 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0733d36-3324-4f74-aad2-539616d2e9a9" containerName="container-00" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116100 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0733d36-3324-4f74-aad2-539616d2e9a9" containerName="container-00" Jan 31 16:00:48 crc kubenswrapper[4735]: E0131 16:00:48.116132 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="extract-content" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116141 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="extract-content" Jan 31 16:00:48 crc kubenswrapper[4735]: E0131 16:00:48.116151 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="extract-utilities" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116159 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="extract-utilities" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116365 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="555d3ff2-0293-4c12-9427-af6df221bfc0" containerName="registry-server" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.116393 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0733d36-3324-4f74-aad2-539616d2e9a9" containerName="container-00" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.117206 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.281942 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzlsc\" (UniqueName: \"kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.282283 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.384634 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzlsc\" (UniqueName: \"kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.384743 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.384908 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.403085 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzlsc\" (UniqueName: \"kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc\") pod \"crc-debug-mpxff\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.441561 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:48 crc kubenswrapper[4735]: I0131 16:00:48.707867 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-mpxff" event={"ID":"be4e7ba0-6503-4d71-bb8b-d35c9502d155","Type":"ContainerStarted","Data":"942577a2bda745c48329c77879f1f51f19c8aadba48ca4a2b42e77102e98c429"} Jan 31 16:00:49 crc kubenswrapper[4735]: I0131 16:00:49.716677 4735 generic.go:334] "Generic (PLEG): container finished" podID="be4e7ba0-6503-4d71-bb8b-d35c9502d155" containerID="01cd9b1ab3e1ca35baecde0adf22e14defc9898c22d5f09eb4d8fc49c234b962" exitCode=0 Jan 31 16:00:49 crc kubenswrapper[4735]: I0131 16:00:49.716724 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-mpxff" event={"ID":"be4e7ba0-6503-4d71-bb8b-d35c9502d155","Type":"ContainerDied","Data":"01cd9b1ab3e1ca35baecde0adf22e14defc9898c22d5f09eb4d8fc49c234b962"} Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.152843 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mpxff"] Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.161601 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mpxff"] Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.821520 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.928469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzlsc\" (UniqueName: \"kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc\") pod \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.928827 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host\") pod \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\" (UID: \"be4e7ba0-6503-4d71-bb8b-d35c9502d155\") " Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.929534 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host" (OuterVolumeSpecName: "host") pod "be4e7ba0-6503-4d71-bb8b-d35c9502d155" (UID: "be4e7ba0-6503-4d71-bb8b-d35c9502d155"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:00:50 crc kubenswrapper[4735]: I0131 16:00:50.935727 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc" (OuterVolumeSpecName: "kube-api-access-lzlsc") pod "be4e7ba0-6503-4d71-bb8b-d35c9502d155" (UID: "be4e7ba0-6503-4d71-bb8b-d35c9502d155"). InnerVolumeSpecName "kube-api-access-lzlsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.031187 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be4e7ba0-6503-4d71-bb8b-d35c9502d155-host\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.031221 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzlsc\" (UniqueName: \"kubernetes.io/projected/be4e7ba0-6503-4d71-bb8b-d35c9502d155-kube-api-access-lzlsc\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.360322 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mlxcv"] Jan 31 16:00:51 crc kubenswrapper[4735]: E0131 16:00:51.360799 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4e7ba0-6503-4d71-bb8b-d35c9502d155" containerName="container-00" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.360824 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4e7ba0-6503-4d71-bb8b-d35c9502d155" containerName="container-00" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.361124 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4e7ba0-6503-4d71-bb8b-d35c9502d155" containerName="container-00" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.361887 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.438415 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8f49\" (UniqueName: \"kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.438828 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.552542 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.552646 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8f49\" (UniqueName: \"kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.552689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.564388 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="be4e7ba0-6503-4d71-bb8b-d35c9502d155" path="/var/lib/kubelet/pods/be4e7ba0-6503-4d71-bb8b-d35c9502d155/volumes" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.571378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8f49\" (UniqueName: \"kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49\") pod \"crc-debug-mlxcv\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.677399 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.734654 4735 scope.go:117] "RemoveContainer" containerID="01cd9b1ab3e1ca35baecde0adf22e14defc9898c22d5f09eb4d8fc49c234b962" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.734668 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mpxff" Jan 31 16:00:51 crc kubenswrapper[4735]: I0131 16:00:51.735792 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" event={"ID":"8d970660-4cc2-4118-a0ed-3b7e29ca894f","Type":"ContainerStarted","Data":"316c3105229ae7d8275fc56d5ae4583f3050327834a5185b664962ee30207a74"} Jan 31 16:00:52 crc kubenswrapper[4735]: I0131 16:00:52.748347 4735 generic.go:334] "Generic (PLEG): container finished" podID="8d970660-4cc2-4118-a0ed-3b7e29ca894f" containerID="3e62a58bd435e8c06a8c5c88a5d6dd265bab4f2daa36670721552a8d879668d6" exitCode=0 Jan 31 16:00:52 crc kubenswrapper[4735]: I0131 16:00:52.748454 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" event={"ID":"8d970660-4cc2-4118-a0ed-3b7e29ca894f","Type":"ContainerDied","Data":"3e62a58bd435e8c06a8c5c88a5d6dd265bab4f2daa36670721552a8d879668d6"} Jan 31 16:00:52 crc kubenswrapper[4735]: I0131 16:00:52.788348 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mlxcv"] Jan 31 16:00:52 crc kubenswrapper[4735]: I0131 16:00:52.797324 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-86dxp/crc-debug-mlxcv"] Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.540401 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:00:53 crc kubenswrapper[4735]: E0131 16:00:53.540746 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.860977 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.993278 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8f49\" (UniqueName: \"kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49\") pod \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.993565 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host\") pod \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\" (UID: \"8d970660-4cc2-4118-a0ed-3b7e29ca894f\") " Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.993672 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host" (OuterVolumeSpecName: "host") pod "8d970660-4cc2-4118-a0ed-3b7e29ca894f" (UID: "8d970660-4cc2-4118-a0ed-3b7e29ca894f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 16:00:53 crc kubenswrapper[4735]: I0131 16:00:53.993925 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d970660-4cc2-4118-a0ed-3b7e29ca894f-host\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:54 crc kubenswrapper[4735]: I0131 16:00:54.005754 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49" (OuterVolumeSpecName: "kube-api-access-s8f49") pod "8d970660-4cc2-4118-a0ed-3b7e29ca894f" (UID: "8d970660-4cc2-4118-a0ed-3b7e29ca894f"). InnerVolumeSpecName "kube-api-access-s8f49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:00:54 crc kubenswrapper[4735]: I0131 16:00:54.096030 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8f49\" (UniqueName: \"kubernetes.io/projected/8d970660-4cc2-4118-a0ed-3b7e29ca894f-kube-api-access-s8f49\") on node \"crc\" DevicePath \"\"" Jan 31 16:00:54 crc kubenswrapper[4735]: I0131 16:00:54.765721 4735 scope.go:117] "RemoveContainer" containerID="3e62a58bd435e8c06a8c5c88a5d6dd265bab4f2daa36670721552a8d879668d6" Jan 31 16:00:54 crc kubenswrapper[4735]: I0131 16:00:54.765759 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/crc-debug-mlxcv" Jan 31 16:00:55 crc kubenswrapper[4735]: I0131 16:00:55.551192 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d970660-4cc2-4118-a0ed-3b7e29ca894f" path="/var/lib/kubelet/pods/8d970660-4cc2-4118-a0ed-3b7e29ca894f/volumes" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.173487 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497921-zzflt"] Jan 31 16:01:00 crc kubenswrapper[4735]: E0131 16:01:00.174682 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d970660-4cc2-4118-a0ed-3b7e29ca894f" containerName="container-00" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.174705 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d970660-4cc2-4118-a0ed-3b7e29ca894f" containerName="container-00" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.175013 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d970660-4cc2-4118-a0ed-3b7e29ca894f" containerName="container-00" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.175974 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.230324 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497921-zzflt"] Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.306780 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.306832 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfds\" (UniqueName: \"kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.306867 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.307168 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.408668 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.408808 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.409768 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.409800 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htfds\" (UniqueName: \"kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.418158 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.423213 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.430405 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htfds\" (UniqueName: \"kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.437787 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle\") pod \"keystone-cron-29497921-zzflt\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.532987 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:00 crc kubenswrapper[4735]: I0131 16:01:00.830925 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497921-zzflt"] Jan 31 16:01:01 crc kubenswrapper[4735]: I0131 16:01:01.840163 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497921-zzflt" event={"ID":"4fe23ca5-05b7-4bd7-948a-84e3b5c20861","Type":"ContainerStarted","Data":"e471b5176e1f923ea741edf546baf03bd4d83e2406a6876b909869609e107593"} Jan 31 16:01:01 crc kubenswrapper[4735]: I0131 16:01:01.840513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497921-zzflt" event={"ID":"4fe23ca5-05b7-4bd7-948a-84e3b5c20861","Type":"ContainerStarted","Data":"6f86a00a9b66dc16de80da463e43fe510f64cf5e9cd364e56be9c833821e2fca"} Jan 31 16:01:01 crc kubenswrapper[4735]: I0131 16:01:01.865166 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497921-zzflt" podStartSLOduration=1.865144129 podStartE2EDuration="1.865144129s" podCreationTimestamp="2026-01-31 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 16:01:01.864573743 +0000 UTC m=+3747.637902855" watchObservedRunningTime="2026-01-31 16:01:01.865144129 +0000 UTC m=+3747.638473181" Jan 31 16:01:03 crc kubenswrapper[4735]: I0131 16:01:03.858173 4735 generic.go:334] "Generic (PLEG): container finished" podID="4fe23ca5-05b7-4bd7-948a-84e3b5c20861" containerID="e471b5176e1f923ea741edf546baf03bd4d83e2406a6876b909869609e107593" exitCode=0 Jan 31 16:01:03 crc kubenswrapper[4735]: I0131 16:01:03.858201 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497921-zzflt" event={"ID":"4fe23ca5-05b7-4bd7-948a-84e3b5c20861","Type":"ContainerDied","Data":"e471b5176e1f923ea741edf546baf03bd4d83e2406a6876b909869609e107593"} Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.293508 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.310344 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfds\" (UniqueName: \"kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds\") pod \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.310641 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle\") pod \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.310799 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data\") pod \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.310852 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys\") pod \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\" (UID: \"4fe23ca5-05b7-4bd7-948a-84e3b5c20861\") " Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.316278 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds" (OuterVolumeSpecName: "kube-api-access-htfds") pod "4fe23ca5-05b7-4bd7-948a-84e3b5c20861" (UID: "4fe23ca5-05b7-4bd7-948a-84e3b5c20861"). InnerVolumeSpecName "kube-api-access-htfds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.329090 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4fe23ca5-05b7-4bd7-948a-84e3b5c20861" (UID: "4fe23ca5-05b7-4bd7-948a-84e3b5c20861"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.349659 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe23ca5-05b7-4bd7-948a-84e3b5c20861" (UID: "4fe23ca5-05b7-4bd7-948a-84e3b5c20861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.376779 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data" (OuterVolumeSpecName: "config-data") pod "4fe23ca5-05b7-4bd7-948a-84e3b5c20861" (UID: "4fe23ca5-05b7-4bd7-948a-84e3b5c20861"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.412514 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.412746 4735 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.412813 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfds\" (UniqueName: \"kubernetes.io/projected/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-kube-api-access-htfds\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.412869 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe23ca5-05b7-4bd7-948a-84e3b5c20861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.880014 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497921-zzflt" event={"ID":"4fe23ca5-05b7-4bd7-948a-84e3b5c20861","Type":"ContainerDied","Data":"6f86a00a9b66dc16de80da463e43fe510f64cf5e9cd364e56be9c833821e2fca"} Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.880073 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f86a00a9b66dc16de80da463e43fe510f64cf5e9cd364e56be9c833821e2fca" Jan 31 16:01:05 crc kubenswrapper[4735]: I0131 16:01:05.880595 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497921-zzflt" Jan 31 16:01:06 crc kubenswrapper[4735]: I0131 16:01:06.539935 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:01:06 crc kubenswrapper[4735]: E0131 16:01:06.540570 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.634583 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:16 crc kubenswrapper[4735]: E0131 16:01:16.635646 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe23ca5-05b7-4bd7-948a-84e3b5c20861" containerName="keystone-cron" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.635663 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe23ca5-05b7-4bd7-948a-84e3b5c20861" containerName="keystone-cron" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.635894 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe23ca5-05b7-4bd7-948a-84e3b5c20861" containerName="keystone-cron" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.637513 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.647168 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.713785 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghn4\" (UniqueName: \"kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.714144 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.714269 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.815528 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.816051 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.815985 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.816329 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.816519 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghn4\" (UniqueName: \"kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.842417 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xghn4\" (UniqueName: \"kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4\") pod \"redhat-marketplace-p4266\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:16 crc kubenswrapper[4735]: I0131 16:01:16.955695 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:17 crc kubenswrapper[4735]: I0131 16:01:17.407420 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:17 crc kubenswrapper[4735]: I0131 16:01:17.984984 4735 generic.go:334] "Generic (PLEG): container finished" podID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerID="21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73" exitCode=0 Jan 31 16:01:17 crc kubenswrapper[4735]: I0131 16:01:17.985144 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerDied","Data":"21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73"} Jan 31 16:01:17 crc kubenswrapper[4735]: I0131 16:01:17.985316 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerStarted","Data":"a83c9815c5575e769bd0da4af25df75b43ad41a70dffcd2ef82c327998165f39"} Jan 31 16:01:18 crc kubenswrapper[4735]: I0131 16:01:18.994656 4735 generic.go:334] "Generic (PLEG): container finished" podID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerID="d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c" exitCode=0 Jan 31 16:01:18 crc kubenswrapper[4735]: I0131 16:01:18.994881 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerDied","Data":"d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c"} Jan 31 16:01:20 crc kubenswrapper[4735]: I0131 16:01:20.005995 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerStarted","Data":"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3"} Jan 31 16:01:20 crc kubenswrapper[4735]: I0131 16:01:20.022530 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4266" podStartSLOduration=2.589004425 podStartE2EDuration="4.022508714s" podCreationTimestamp="2026-01-31 16:01:16 +0000 UTC" firstStartedPulling="2026-01-31 16:01:17.98764424 +0000 UTC m=+3763.760973282" lastFinishedPulling="2026-01-31 16:01:19.421148499 +0000 UTC m=+3765.194477571" observedRunningTime="2026-01-31 16:01:20.020947899 +0000 UTC m=+3765.794276971" watchObservedRunningTime="2026-01-31 16:01:20.022508714 +0000 UTC m=+3765.795837766" Jan 31 16:01:21 crc kubenswrapper[4735]: I0131 16:01:21.540466 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:01:21 crc kubenswrapper[4735]: E0131 16:01:21.541300 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:01:25 crc kubenswrapper[4735]: I0131 16:01:25.975748 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-844d5857fb-gs56h_89ea2bff-49c7-4b54-a026-c7c632da1b0c/barbican-api/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.127041 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-844d5857fb-gs56h_89ea2bff-49c7-4b54-a026-c7c632da1b0c/barbican-api-log/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.154601 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c7cb96f6b-ctvpf_df8f8f18-32ae-4729-9e50-304d7dfdbf07/barbican-keystone-listener/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.221310 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c7cb96f6b-ctvpf_df8f8f18-32ae-4729-9e50-304d7dfdbf07/barbican-keystone-listener-log/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.511209 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66c7c44665-k447s_c8df5434-b30b-49b1-9130-b152a98f3af0/barbican-worker-log/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.540201 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66c7c44665-k447s_c8df5434-b30b-49b1-9130-b152a98f3af0/barbican-worker/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.661135 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lkft7_8b16424b-3400-4f1a-931f-f0a2a398859c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.772782 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/ceilometer-central-agent/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.840338 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/ceilometer-notification-agent/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.848928 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/proxy-httpd/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.939202 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3025f8d1-e415-4db4-815a-97b6ef8d09dc/sg-core/0.log" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.956687 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:26 crc kubenswrapper[4735]: I0131 16:01:26.956750 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.008172 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.041715 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f/cinder-api-log/0.log" Jan 
31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.077525 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c6e6f6c4-12bb-43d4-a77c-2ba5c2b79f1f/cinder-api/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.120491 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.211058 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_475cce0c-6c29-41e3-8c56-b5368f1b9e92/probe/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.242731 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.382412 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_475cce0c-6c29-41e3-8c56-b5368f1b9e92/cinder-scheduler/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.433805 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hspmq_a56e09c3-9ece-4ce0-9a98-1d97bd4b24d7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.511539 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vfzc9_1898e10e-7cb4-453e-84f8-ee45e1b109a3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.630556 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/init/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.820670 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n87t9_f3889565-cd9c-4a0a-80d5-09bc3f0e0a83/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.841678 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/init/0.log" Jan 31 16:01:27 crc kubenswrapper[4735]: I0131 16:01:27.888176 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-gkqk4_d4578674-5cf7-4382-811e-fe1cef58fff2/dnsmasq-dns/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.060571 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5620f33b-a10a-41ae-a9f2-707f94ebbe59/glance-httpd/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.072963 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5620f33b-a10a-41ae-a9f2-707f94ebbe59/glance-log/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.223819 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fdceee3d-5a28-4c46-bd6e-40048cdd56c9/glance-httpd/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.251737 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fdceee3d-5a28-4c46-bd6e-40048cdd56c9/glance-log/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.335841 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-784979f994-vtd4m_c022909b-46cd-4e9d-851e-483e23358bd8/horizon/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.598758 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wf6kf_2cfb222c-2a44-4521-af34-3d352b0cfdea/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.672872 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-784979f994-vtd4m_c022909b-46cd-4e9d-851e-483e23358bd8/horizon-log/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.751580 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-964lq_7737dd79-8b1c-448a-a81d-5a06b58e32e1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.935614 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b94ccc6d9-2fktc_3d196991-4f4f-4bb3-a113-b33659619f09/keystone-api/0.log" Jan 31 16:01:28 crc kubenswrapper[4735]: I0131 16:01:28.971474 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29497921-zzflt_4fe23ca5-05b7-4bd7-948a-84e3b5c20861/keystone-cron/0.log" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.089617 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4266" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="registry-server" containerID="cri-o://f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3" gracePeriod=2 Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.109158 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_336b74ed-3ea3-4963-9497-a02b65b80a3e/kube-state-metrics/0.log" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.174988 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b49fw_dfa49e11-fd8b-4933-b184-a524c747ee02/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.504286 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.560634 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities\") pod \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.560741 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content\") pod \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.560778 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xghn4\" (UniqueName: \"kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4\") pod \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\" (UID: \"c10fa8b2-7499-4ba4-84aa-6c9322956dfb\") " Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.562669 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities" (OuterVolumeSpecName: "utilities") pod "c10fa8b2-7499-4ba4-84aa-6c9322956dfb" (UID: "c10fa8b2-7499-4ba4-84aa-6c9322956dfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.581753 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4" (OuterVolumeSpecName: "kube-api-access-xghn4") pod "c10fa8b2-7499-4ba4-84aa-6c9322956dfb" (UID: "c10fa8b2-7499-4ba4-84aa-6c9322956dfb"). InnerVolumeSpecName "kube-api-access-xghn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.586608 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cd5d5cd9-8xjdv_e70f9259-db47-4290-9778-8bf2849a809a/neutron-api/0.log" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.596743 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c10fa8b2-7499-4ba4-84aa-6c9322956dfb" (UID: "c10fa8b2-7499-4ba4-84aa-6c9322956dfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.662636 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.662665 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.662675 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xghn4\" (UniqueName: \"kubernetes.io/projected/c10fa8b2-7499-4ba4-84aa-6c9322956dfb-kube-api-access-xghn4\") on node \"crc\" DevicePath \"\"" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.861553 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-74cd5d5cd9-8xjdv_e70f9259-db47-4290-9778-8bf2849a809a/neutron-httpd/0.log" Jan 31 16:01:29 crc kubenswrapper[4735]: I0131 16:01:29.935197 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9lkrg_bdcd1840-f7b6-41b8-bad9-43e441f1cad9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.099553 4735 generic.go:334] "Generic (PLEG): container finished" podID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerID="f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3" exitCode=0 Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.099589 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerDied","Data":"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3"} Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.099616 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4266" event={"ID":"c10fa8b2-7499-4ba4-84aa-6c9322956dfb","Type":"ContainerDied","Data":"a83c9815c5575e769bd0da4af25df75b43ad41a70dffcd2ef82c327998165f39"} Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.099633 4735 scope.go:117] "RemoveContainer" containerID="f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.099761 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4266" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.120687 4735 scope.go:117] "RemoveContainer" containerID="d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.146289 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.153261 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4266"] Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.157540 4735 scope.go:117] "RemoveContainer" containerID="21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.201984 4735 scope.go:117] "RemoveContainer" containerID="f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3" Jan 31 16:01:30 crc kubenswrapper[4735]: E0131 16:01:30.205180 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3\": container with ID starting with f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3 not found: ID does not exist" containerID="f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.205211 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3"} err="failed to get container status \"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3\": rpc error: code = NotFound desc = could not find container \"f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3\": container with ID starting with f15a84c6945d03ab3a502d3f24a3e4d09fd14fd58d770d7077371f5ce35fa5f3 not found: ID does not exist" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.205232 4735 scope.go:117] "RemoveContainer" containerID="d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c" Jan 31 16:01:30 crc kubenswrapper[4735]: E0131 16:01:30.207969 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c\": container with ID starting with d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c not found: ID does not exist" containerID="d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.208013 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c"} err="failed to get container status \"d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c\": rpc error: code = NotFound desc = could not find container \"d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c\": container with ID starting with d13ebe9538cde0b7c664d32e1131bfa3adc8fc2e2b21021bda1b771bc1f78c1c not found: ID does not exist" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.208042 4735 scope.go:117] "RemoveContainer" containerID="21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73" Jan 31 16:01:30 crc kubenswrapper[4735]: E0131 16:01:30.208670 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73\": container with ID starting with 21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73 not found: ID does not exist" containerID="21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.208698 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73"} err="failed to get container status \"21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73\": rpc error: code = NotFound desc = could not find container \"21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73\": container with ID starting with 21369b071ec45871fd844ef5311db9b0983a9225c636bb839b65284cfefbcf73 not found: ID does not exist" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.459859 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7fb482d0-dc39-4b62-81e7-c680dc211c0b/nova-api-log/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.535949 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2e5c1a1c-5aa4-428a-8729-77ce2cb81992/nova-cell0-conductor-conductor/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.760017 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1a55c776-7b88-4c64-a106-b2f8619425a7/nova-cell1-conductor-conductor/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.850716 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7fb482d0-dc39-4b62-81e7-c680dc211c0b/nova-api-api/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.911971 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ecaa245-bfd5-42b9-b10f-117b0dfef5cb/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 16:01:30 crc kubenswrapper[4735]: I0131 16:01:30.984654 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-grs6l_c6cf70ec-3fc5-4144-8756-bcf0a8704416/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.207294 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_df24097c-68e3-4bbc-b56b-cc19e5e91ea6/nova-metadata-log/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.455670 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/mysql-bootstrap/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.525899 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_622b97c5-deed-459f-a9a1-c407b424f921/nova-scheduler-scheduler/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.557540 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" path="/var/lib/kubelet/pods/c10fa8b2-7499-4ba4-84aa-6c9322956dfb/volumes" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.639986 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/mysql-bootstrap/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.687202 4735 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5616e745-0304-4987-bc98-aaa42fc5f6ea/galera/0.log" Jan 31 16:01:31 crc kubenswrapper[4735]: I0131 16:01:31.988142 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/mysql-bootstrap/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.121986 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/galera/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.176170 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8f2fd0fe-2906-4934-b08b-27032a482331/mysql-bootstrap/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.361631 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75603184-bd90-47b2-a5e2-c06e0c205001/openstackclient/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.407750 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_df24097c-68e3-4bbc-b56b-cc19e5e91ea6/nova-metadata-metadata/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.430004 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2vhbk_07524504-28f6-44cc-8630-2e736f87ff3d/ovn-controller/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.647756 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fjc7w_ce44083a-52e6-45b6-bd3f-90ae832c54fa/openstack-network-exporter/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.707976 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server-init/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.864492 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovs-vswitchd/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.884718 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server-init/0.log" Jan 31 16:01:32 crc kubenswrapper[4735]: I0131 16:01:32.960857 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v8dt8_bc0eebe3-8b72-4599-b6f6-ba54f3836563/ovsdb-server/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.155549 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bbe4f564-44b1-441d-aed3-b08ad06141c6/openstack-network-exporter/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.183225 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rq2xl_67db3c53-552c-458d-b333-09ad7b0f0447/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.235701 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bbe4f564-44b1-441d-aed3-b08ad06141c6/ovn-northd/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.485137 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_667db586-48c3-4b33-8e39-eb27c45d7841/openstack-network-exporter/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.550623 4735 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovsdbserver-nb-0_667db586-48c3-4b33-8e39-eb27c45d7841/ovsdbserver-nb/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.713843 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7ff11e61-5fe3-474b-ac0d-8a89a364de0e/openstack-network-exporter/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.749269 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7ff11e61-5fe3-474b-ac0d-8a89a364de0e/ovsdbserver-sb/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.887660 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-766f55bc7b-w8qbt_a0e33520-34a2-4009-9f61-7b6211fa8744/placement-api/0.log" Jan 31 16:01:33 crc kubenswrapper[4735]: I0131 16:01:33.965332 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-766f55bc7b-w8qbt_a0e33520-34a2-4009-9f61-7b6211fa8744/placement-log/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.028965 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/setup-container/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.220360 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/setup-container/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.239152 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/rabbitmq/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.286398 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_43595acf-df41-4c13-8d02-35d62877fecc/setup-container/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.511783 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/setup-container/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.543062 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9569e461-f5f7-4a24-a8d9-7f67e8f46b04/rabbitmq/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.584097 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4w6tj_0e45ff90-974c-42ba-986c-9303f5cde30f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.777347 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9thw4_6468a09d-c7d4-428a-bfa1-50c28830f709/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:34 crc kubenswrapper[4735]: I0131 16:01:34.879527 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gcqnm_2bdb9dbd-b178-43f6-985a-1b19f40820cd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.005581 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8rvd2_c882b7b4-b823-41c7-98cd-862f19262e18/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.090523 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zlgxw_ddf071a4-728c-470f-829d-c905a4b60f9d/ssh-known-hosts-edpm-deployment/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.338100 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f4f968b5f-slb77_bdf1b1c9-1210-4c8f-beba-1780efc67349/proxy-server/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.348104 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f4f968b5f-slb77_bdf1b1c9-1210-4c8f-beba-1780efc67349/proxy-httpd/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.385965 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6np6m_83dc2b98-a7e1-4654-95cf-fd37532fa571/swift-ring-rebalance/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.551025 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:01:35 crc kubenswrapper[4735]: E0131 16:01:35.551275 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.591063 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-auditor/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.626091 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-reaper/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.654858 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-replicator/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.783173 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-auditor/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.811330 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/account-server/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.849396 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-server/0.log" Jan 31 16:01:35 crc kubenswrapper[4735]: I0131 16:01:35.867834 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-replicator/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.019855 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/container-updater/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.058114 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-auditor/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.113277 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-expirer/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.139767 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-replicator/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.245253 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-server/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.272791 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/object-updater/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.305936 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/rsync/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.352059 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5ca8c406-9ce6-427d-94ab-293bb0cb4c86/swift-recon-cron/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.554632 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cdsth_ad304d37-c310-4b94-b535-b75f3ee49e81/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.590037 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3fc05b7f-c3c5-4fd8-807e-eb2d83710f4f/tempest-tests-tempest-tests-runner/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.721728 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9bcab996-865d-4021-bd63-7e17f6093145/test-operator-logs-container/0.log" Jan 31 16:01:36 crc kubenswrapper[4735]: I0131 16:01:36.873901 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-sg76l_a02498a0-04e3-4062-b19f-aa22ab9544a3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 16:01:46 crc kubenswrapper[4735]: I0131 16:01:46.777769 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bb53ef9b-e389-4e78-a677-5def022eab7e/memcached/0.log" Jan 31 16:01:48 crc kubenswrapper[4735]: I0131 16:01:48.540665 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:01:48 crc kubenswrapper[4735]: E0131 16:01:48.541106 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:01 crc kubenswrapper[4735]: I0131 16:02:01.540256 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:02:01 crc kubenswrapper[4735]: E0131 16:02:01.541017 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.411061 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.531780 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.562955 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.610938 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.721441 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/util/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.747196 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/extract/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.753436 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97afed73b5481e079b62eec4fd5d3403afe9d2fad32690bd7845d999a78hqfz_2bcbc1aa-0b7f-4aa4-a553-25427de0a734/pull/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.946815 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-qh6xs_6cc9c424-b3f7-4744-92d8-5844915879bf/manager/0.log" Jan 31 16:02:05 crc kubenswrapper[4735]: I0131 16:02:05.978259 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-787499fbb-drsgx_a27712fb-eb89-49ff-b5a5-1432a0a4774f/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.091120 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-bxf2k_d0a68002-1422-44d3-8656-2901a42b42f4/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.212827 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64d858bbbd-k4bh2_e82a74c5-fe5d-4e1a-9e6e-7189ac1465e2/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.306333 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-r7xlm_c4915a12-75dc-4b2e-a039-c98287c8cec4/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.395853 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-bb6l7_627cef1f-bb76-4dd2-b7d1-b3f55bdeb335/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.566415 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-87bd9d46f-hzrws_ef1d6c4a-a87c-4afc-9b80-65c5370a3c5d/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.726465 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-66z2p_8d42c163-9e7d-485f-b94e-4796166ba8f9/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.803293 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-rfrcn_0aef4ea8-3e5e-497e-b2bd-280d521e895f/manager/0.log" Jan 31 16:02:06 crc kubenswrapper[4735]: I0131 16:02:06.883450 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7d96d95959-jxthf_f4b1920b-1fb0-4f10-a3fc-97d19aacc34e/manager/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.008592 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-7v4mn_5926394d-8ab0-46d7-9bb6-1ea59a0d7511/manager/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.121744 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-rtzcv_c814b622-e60d-492c-ae86-9e78b37297e4/manager/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.211621 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-f8h8s_f2610081-f50c-441f-8b8a-bc2a236065f1/manager/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.391858 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dcz2lk_437ef1c6-09b5-45c2-b88d-e42e432ae801/manager/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.675114 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b494f4958-8qvfp_7043e467-d103-458b-a498-c110f06809f1/operator/0.log" Jan 31 16:02:07 crc kubenswrapper[4735]: I0131 16:02:07.862120 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pp8kj_5a0ed87d-afcc-44e0-a590-4f56b4338cb7/registry-server/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.140300 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-thcqg_8bc95764-b0cb-4206-af35-fefb00d8c71f/manager/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.372357 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-zgmmx_bc56f00a-31c6-474b-af93-59442f956567/manager/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.490049 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b89ddb58-f8f64_b469fe09-816f-4ffa-a61d-82e448011837/manager/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.598288 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lmv8c_9ba40dc8-290a-4a40-a039-609874c181d4/operator/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.756136 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76864d4fdb-ps2jp_1cd70b29-6ef8-4625-93eb-f7113200b385/manager/0.log" Jan 31 16:02:08 crc kubenswrapper[4735]: I0131 16:02:08.985714 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-55f4d66b54-gcks8_3808aa6d-1386-4e9a-81b2-e37c11246170/manager/0.log" Jan 31 16:02:09 crc kubenswrapper[4735]: I0131 16:02:09.024345 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-dlpt5_57aa7be3-f130-41b7-a400-1c2ddd1b8ce3/manager/0.log" Jan 31 16:02:09 crc kubenswrapper[4735]: I0131 16:02:09.035038 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-8446785844-jtbmg_80924e89-7cef-4879-b955-28d3ef271729/manager/0.log" Jan 31 16:02:09 crc kubenswrapper[4735]: I0131 16:02:09.184844 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-pqmvf_0c4b1bae-6cff-4914-907c-f6c9867a803b/manager/0.log" Jan 31 16:02:12 crc kubenswrapper[4735]: I0131 16:02:12.539984 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:02:12 crc kubenswrapper[4735]: E0131 16:02:12.540891 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:23 crc kubenswrapper[4735]: I0131 16:02:23.540071 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:02:23 crc kubenswrapper[4735]: E0131 16:02:23.541162 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:28 crc kubenswrapper[4735]: I0131 16:02:28.948101 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nf8zl_52d5f4fc-bb86-426a-b56e-810e4ffc1315/control-plane-machine-set-operator/0.log" Jan 31 16:02:29 crc kubenswrapper[4735]: I0131 16:02:29.088221 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9q4gk_d4340fb1-7455-4140-9c75-2d075ea0306c/kube-rbac-proxy/0.log" Jan 31 16:02:29 crc kubenswrapper[4735]: I0131 16:02:29.117576 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9q4gk_d4340fb1-7455-4140-9c75-2d075ea0306c/machine-api-operator/0.log" Jan 31 16:02:36 crc 
kubenswrapper[4735]: I0131 16:02:36.540166 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:02:36 crc kubenswrapper[4735]: E0131 16:02:36.541014 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:42 crc kubenswrapper[4735]: I0131 16:02:42.755336 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2x2bm_1715eb23-6cf5-4a8f-9d53-11fae6b38859/cert-manager-controller/0.log" Jan 31 16:02:42 crc kubenswrapper[4735]: I0131 16:02:42.975037 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-25p6r_7032d6de-d341-4686-a7c9-f470bf8237cb/cert-manager-webhook/0.log" Jan 31 16:02:42 crc kubenswrapper[4735]: I0131 16:02:42.979074 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5t4c2_3cca71ec-1c99-4260-9208-9e4202ff3e3e/cert-manager-cainjector/0.log" Jan 31 16:02:50 crc kubenswrapper[4735]: I0131 16:02:50.541128 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:02:50 crc kubenswrapper[4735]: E0131 16:02:50.541972 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:02:56 crc kubenswrapper[4735]: I0131 16:02:56.666888 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-4r6xg_ecdc44de-8e1b-477a-860f-780a279594cc/nmstate-console-plugin/0.log" Jan 31 16:02:56 crc kubenswrapper[4735]: I0131 16:02:56.836259 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hs75p_c18ba473-8399-4059-a6c4-22990f6e1cfe/nmstate-handler/0.log" Jan 31 16:02:56 crc kubenswrapper[4735]: I0131 16:02:56.901965 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6cbvj_69c76992-f3a3-4e9a-bc71-0eb6a7852b6e/kube-rbac-proxy/0.log" Jan 31 16:02:56 crc kubenswrapper[4735]: I0131 16:02:56.928512 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6cbvj_69c76992-f3a3-4e9a-bc71-0eb6a7852b6e/nmstate-metrics/0.log" Jan 31 16:02:57 crc kubenswrapper[4735]: I0131 16:02:57.048280 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9rvf2_a11f4f28-7d8c-439b-9e8a-903060113cf4/nmstate-operator/0.log" Jan 31 16:02:57 crc kubenswrapper[4735]: I0131 16:02:57.164887 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lztrk_cd12d118-a925-4765-a3c4-38e34aa3c548/nmstate-webhook/0.log" Jan 31 16:03:05 crc kubenswrapper[4735]: I0131 16:03:05.554987 4735 
scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:03:05 crc kubenswrapper[4735]: E0131 16:03:05.555888 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:03:17 crc kubenswrapper[4735]: I0131 16:03:17.540313 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:03:17 crc kubenswrapper[4735]: E0131 16:03:17.541161 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:03:26 crc kubenswrapper[4735]: I0131 16:03:26.842592 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-j9l75_129f74a8-c107-4f72-9972-d2c81e811b93/kube-rbac-proxy/0.log" Jan 31 16:03:26 crc kubenswrapper[4735]: I0131 16:03:26.877216 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-j9l75_129f74a8-c107-4f72-9972-d2c81e811b93/controller/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.016287 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.204485 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.210056 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.213356 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.218781 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.419467 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.420721 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.428528 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.445753 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.632706 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-reloader/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.635734 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/controller/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.656552 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-metrics/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.680880 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/cp-frr-files/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.808246 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/frr-metrics/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.835955 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/kube-rbac-proxy/0.log" Jan 31 16:03:27 crc kubenswrapper[4735]: I0131 16:03:27.882829 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/kube-rbac-proxy-frr/0.log" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.065308 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/reloader/0.log" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.134564 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-849x9_441f5a71-b5fa-4f6f-a825-40eb055760a0/frr-k8s-webhook-server/0.log" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.255426 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-866454bfd-gxbsb_0f2b4446-8543-4182-bf59-d1be74b899c9/manager/0.log" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.456729 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-54d6fb8967-d6d4c_f84a6826-6439-4751-a46e-84a04759c021/webhook-server/0.log" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.539990 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:03:28 crc kubenswrapper[4735]: E0131 16:03:28.540227 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gq77t_openshift-machine-config-operator(582442e0-b079-476d-849d-a4902306aba0)\"" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" Jan 31 16:03:28 crc kubenswrapper[4735]: I0131 16:03:28.554043 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v7cgt_38acb809-064d-43d6-8800-40cd1cf7f89a/kube-rbac-proxy/0.log" Jan 31 16:03:29 crc kubenswrapper[4735]: I0131 16:03:29.042028 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-v7cgt_38acb809-064d-43d6-8800-40cd1cf7f89a/speaker/0.log" Jan 31 16:03:29 crc kubenswrapper[4735]: I0131 16:03:29.215083 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w4xx8_651b0e17-dd6c-438b-a213-f4fd1da48cae/frr/0.log" Jan 31 16:03:41 crc kubenswrapper[4735]: I0131 16:03:41.541754 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:03:42 crc kubenswrapper[4735]: I0131 16:03:42.416391 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"0ab98c14ed86b153bf8199d581ef19375cfd78ddd48d4bb06f4b572d0b5af3ff"} Jan 31 16:03:42 crc kubenswrapper[4735]: I0131 16:03:42.782870 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 16:03:42 crc kubenswrapper[4735]: I0131 16:03:42.956461 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 16:03:42 crc kubenswrapper[4735]: I0131 16:03:42.973697 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.019341 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.140558 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/util/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.159864 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.174853 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dchd8gd_19a8ca6f-2c54-4006-8e89-4aa7bde7e254/extract/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.368235 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.503353 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.506004 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.519804 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.746617 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/util/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.747623 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/pull/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.786070 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713w6m4z_691e8099-7fa0-4462-857f-6bbfab6502bc/extract/0.log" Jan 31 16:03:43 crc kubenswrapper[4735]: I0131 16:03:43.921815 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.069269 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.123308 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.139811 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.348699 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-utilities/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.396999 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/extract-content/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.596802 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.744827 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.773435 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.781269 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wvwvj_8129b1bd-e5c2-4d3c-b631-b983b1a424c4/registry-server/0.log" Jan 31 16:03:44 crc kubenswrapper[4735]: I0131 16:03:44.797334 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 16:03:44 
crc kubenswrapper[4735]: I0131 16:03:44.994087 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-utilities/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.011253 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/extract-content/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.232671 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5l67c_26db5009-aa32-4023-88bf-05ba79d4d907/marketplace-operator/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.251916 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.554473 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hw9dp_5d432da9-d868-493b-be4c-3cb7c8b9899e/registry-server/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.561359 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.619646 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.625116 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.784064 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-utilities/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.815411 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/extract-content/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.948098 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r7rmb_bd27f576-2b59-40db-b27f-b41422fdeea3/registry-server/0.log" Jan 31 16:03:45 crc kubenswrapper[4735]: I0131 16:03:45.994445 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 16:03:46 crc kubenswrapper[4735]: I0131 16:03:46.145147 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 16:03:46 crc kubenswrapper[4735]: I0131 16:03:46.172069 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 16:03:46 crc kubenswrapper[4735]: I0131 16:03:46.198998 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 16:03:46 crc 
kubenswrapper[4735]: I0131 16:03:46.336525 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-utilities/0.log" Jan 31 16:03:46 crc kubenswrapper[4735]: I0131 16:03:46.351478 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/extract-content/0.log" Jan 31 16:03:46 crc kubenswrapper[4735]: I0131 16:03:46.687093 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6c52m_dd10e32d-9552-474f-abfa-5a15cb41d654/registry-server/0.log" Jan 31 16:04:10 crc kubenswrapper[4735]: E0131 16:04:10.289361 4735 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.241:54708->38.102.83.241:38007: read tcp 38.102.83.241:54708->38.102.83.241:38007: read: connection reset by peer Jan 31 16:04:10 crc kubenswrapper[4735]: E0131 16:04:10.305694 4735 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.241:54682->38.102.83.241:38007: write tcp 38.102.83.241:54682->38.102.83.241:38007: write: broken pipe Jan 31 16:05:27 crc kubenswrapper[4735]: I0131 16:05:27.632631 4735 generic.go:334] "Generic (PLEG): container finished" podID="988570cf-cf0a-4b92-abe2-9661ac089142" containerID="2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9" exitCode=0 Jan 31 16:05:27 crc kubenswrapper[4735]: I0131 16:05:27.633253 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-86dxp/must-gather-grnzd" event={"ID":"988570cf-cf0a-4b92-abe2-9661ac089142","Type":"ContainerDied","Data":"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9"} Jan 31 16:05:27 crc kubenswrapper[4735]: I0131 16:05:27.634152 4735 scope.go:117] "RemoveContainer" containerID="2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9" Jan 31 16:05:27 crc kubenswrapper[4735]: I0131 16:05:27.737901 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-86dxp_must-gather-grnzd_988570cf-cf0a-4b92-abe2-9661ac089142/gather/0.log" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.059200 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-86dxp/must-gather-grnzd"] Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.059948 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-86dxp/must-gather-grnzd" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="copy" containerID="cri-o://3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d" gracePeriod=2 Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.070831 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-86dxp/must-gather-grnzd"] Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.549839 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-86dxp_must-gather-grnzd_988570cf-cf0a-4b92-abe2-9661ac089142/copy/0.log" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.550753 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.655700 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq8w5\" (UniqueName: \"kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5\") pod \"988570cf-cf0a-4b92-abe2-9661ac089142\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.655825 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output\") pod \"988570cf-cf0a-4b92-abe2-9661ac089142\" (UID: \"988570cf-cf0a-4b92-abe2-9661ac089142\") " Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.660933 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5" (OuterVolumeSpecName: "kube-api-access-hq8w5") pod "988570cf-cf0a-4b92-abe2-9661ac089142" (UID: "988570cf-cf0a-4b92-abe2-9661ac089142"). InnerVolumeSpecName "kube-api-access-hq8w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.747474 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-86dxp_must-gather-grnzd_988570cf-cf0a-4b92-abe2-9661ac089142/copy/0.log" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.747750 4735 generic.go:334] "Generic (PLEG): container finished" podID="988570cf-cf0a-4b92-abe2-9661ac089142" containerID="3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d" exitCode=143 Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.747798 4735 scope.go:117] "RemoveContainer" containerID="3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.747918 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-86dxp/must-gather-grnzd" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.757860 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq8w5\" (UniqueName: \"kubernetes.io/projected/988570cf-cf0a-4b92-abe2-9661ac089142-kube-api-access-hq8w5\") on node \"crc\" DevicePath \"\"" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.769357 4735 scope.go:117] "RemoveContainer" containerID="2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.832683 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "988570cf-cf0a-4b92-abe2-9661ac089142" (UID: "988570cf-cf0a-4b92-abe2-9661ac089142"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:05:38 crc kubenswrapper[4735]: I0131 16:05:38.859604 4735 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/988570cf-cf0a-4b92-abe2-9661ac089142-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 16:05:39 crc kubenswrapper[4735]: I0131 16:05:39.242069 4735 scope.go:117] "RemoveContainer" containerID="3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d" Jan 31 16:05:39 crc kubenswrapper[4735]: E0131 16:05:39.242904 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d\": container with ID starting with 3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d not found: ID does not exist" containerID="3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d" Jan 31 16:05:39 crc kubenswrapper[4735]: I0131 16:05:39.242942 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d"} err="failed to get container status \"3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d\": rpc error: code = NotFound desc = could not find container \"3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d\": container with ID starting with 3b7ce41aeea1b6b986749c2a7c287b9f9a594d92176370ac8d9eec0ff66ae54d not found: ID does not exist" Jan 31 16:05:39 crc kubenswrapper[4735]: I0131 16:05:39.242974 4735 scope.go:117] "RemoveContainer" containerID="2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9" Jan 31 16:05:39 crc kubenswrapper[4735]: E0131 16:05:39.243634 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9\": container with ID starting with 2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9 not found: ID does not exist" containerID="2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9" Jan 31 16:05:39 crc kubenswrapper[4735]: I0131 16:05:39.243682 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9"} err="failed to get container status \"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9\": rpc error: code = NotFound desc = could not find container \"2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9\": container with ID starting with 2dee583877862d313f4b3b5a07f2da436a9398ce266476bce8de76ac2eb204b9 not found: ID does not exist" Jan 31 16:05:39 crc kubenswrapper[4735]: I0131 16:05:39.585733 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" path="/var/lib/kubelet/pods/988570cf-cf0a-4b92-abe2-9661ac089142/volumes" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.084102 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:06 crc kubenswrapper[4735]: E0131 16:06:06.085298 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="extract-utilities" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085322 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="extract-utilities" Jan 31 16:06:06 crc kubenswrapper[4735]: E0131 16:06:06.085358 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="gather" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085368 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="gather" Jan 31 16:06:06 crc kubenswrapper[4735]: E0131 16:06:06.085388 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="copy" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085398 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="copy" Jan 31 16:06:06 crc kubenswrapper[4735]: E0131 16:06:06.085446 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="extract-content" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085460 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="extract-content" Jan 31 16:06:06 crc kubenswrapper[4735]: E0131 16:06:06.085495 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="registry-server" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085507 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="registry-server" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085771 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10fa8b2-7499-4ba4-84aa-6c9322956dfb" containerName="registry-server" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085816 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="copy" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.085827 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="988570cf-cf0a-4b92-abe2-9661ac089142" containerName="gather" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.087698 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.099129 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.217466 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.217664 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.217963 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwft7\" (UniqueName: \"kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.320208 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwft7\" (UniqueName: \"kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.320338 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.320443 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.321014 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.321092 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:06 crc kubenswrapper[4735]: I0131 16:06:06.790527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwft7\" (UniqueName: \"kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7\") pod \"redhat-operators-bqhkl\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:07 crc kubenswrapper[4735]: I0131 16:06:07.010750 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:07 crc kubenswrapper[4735]: I0131 16:06:07.346624 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:06:07 crc kubenswrapper[4735]: I0131 16:06:07.346916 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:06:07 crc kubenswrapper[4735]: I0131 16:06:07.502663 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:08 crc kubenswrapper[4735]: I0131 16:06:08.085728 4735 generic.go:334] "Generic (PLEG): container finished" podID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerID="89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697" exitCode=0 Jan 31 16:06:08 crc kubenswrapper[4735]: I0131 16:06:08.085811 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerDied","Data":"89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697"} Jan 31 16:06:08 crc kubenswrapper[4735]: I0131 16:06:08.085852 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerStarted","Data":"3bc5a2c44f3447e76d0858005667129b5662a4029849fd51dd83102476c420b7"} Jan 31 16:06:08 crc kubenswrapper[4735]: I0131 16:06:08.094550 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 16:06:09 crc kubenswrapper[4735]: I0131 16:06:09.101180 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerStarted","Data":"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c"} Jan 31 16:06:10 crc kubenswrapper[4735]: I0131 16:06:10.115404 4735 generic.go:334] "Generic (PLEG): container finished" podID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerID="a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c" exitCode=0 Jan 31 16:06:10 crc kubenswrapper[4735]: I0131 16:06:10.115496 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerDied","Data":"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c"} Jan 31 16:06:12 crc kubenswrapper[4735]: I0131 16:06:12.137791 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" 
event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerStarted","Data":"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef"} Jan 31 16:06:12 crc kubenswrapper[4735]: I0131 16:06:12.173431 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bqhkl" podStartSLOduration=3.289054539 podStartE2EDuration="6.173400743s" podCreationTimestamp="2026-01-31 16:06:06 +0000 UTC" firstStartedPulling="2026-01-31 16:06:08.09405254 +0000 UTC m=+4053.867381592" lastFinishedPulling="2026-01-31 16:06:10.978398714 +0000 UTC m=+4056.751727796" observedRunningTime="2026-01-31 16:06:12.169345509 +0000 UTC m=+4057.942674631" watchObservedRunningTime="2026-01-31 16:06:12.173400743 +0000 UTC m=+4057.946729795" Jan 31 16:06:17 crc kubenswrapper[4735]: I0131 16:06:17.011471 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:17 crc kubenswrapper[4735]: I0131 16:06:17.012070 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:18 crc kubenswrapper[4735]: I0131 16:06:18.081836 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bqhkl" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="registry-server" probeResult="failure" output=< Jan 31 16:06:18 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 31 16:06:18 crc kubenswrapper[4735]: > Jan 31 16:06:27 crc kubenswrapper[4735]: I0131 16:06:27.080715 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:27 crc kubenswrapper[4735]: I0131 16:06:27.149077 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:27 crc kubenswrapper[4735]: I0131 16:06:27.338642 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.364958 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bqhkl" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="registry-server" containerID="cri-o://900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef" gracePeriod=2 Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.813718 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.936333 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwft7\" (UniqueName: \"kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7\") pod \"6432f04d-6ceb-435f-8d36-184ed2bb8726\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.936591 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content\") pod \"6432f04d-6ceb-435f-8d36-184ed2bb8726\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.936625 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities\") pod \"6432f04d-6ceb-435f-8d36-184ed2bb8726\" (UID: \"6432f04d-6ceb-435f-8d36-184ed2bb8726\") " Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.938161 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities" (OuterVolumeSpecName: "utilities") pod "6432f04d-6ceb-435f-8d36-184ed2bb8726" (UID: "6432f04d-6ceb-435f-8d36-184ed2bb8726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:06:28 crc kubenswrapper[4735]: I0131 16:06:28.946604 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7" (OuterVolumeSpecName: "kube-api-access-cwft7") pod "6432f04d-6ceb-435f-8d36-184ed2bb8726" (UID: "6432f04d-6ceb-435f-8d36-184ed2bb8726"). InnerVolumeSpecName "kube-api-access-cwft7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.038749 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwft7\" (UniqueName: \"kubernetes.io/projected/6432f04d-6ceb-435f-8d36-184ed2bb8726-kube-api-access-cwft7\") on node \"crc\" DevicePath \"\"" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.038786 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.122200 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6432f04d-6ceb-435f-8d36-184ed2bb8726" (UID: "6432f04d-6ceb-435f-8d36-184ed2bb8726"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.140533 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6432f04d-6ceb-435f-8d36-184ed2bb8726-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.382126 4735 generic.go:334] "Generic (PLEG): container finished" podID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerID="900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef" exitCode=0 Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.382210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerDied","Data":"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef"} Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.382262 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqhkl" event={"ID":"6432f04d-6ceb-435f-8d36-184ed2bb8726","Type":"ContainerDied","Data":"3bc5a2c44f3447e76d0858005667129b5662a4029849fd51dd83102476c420b7"} Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.382296 4735 scope.go:117] "RemoveContainer" containerID="900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.382311 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqhkl" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.417194 4735 scope.go:117] "RemoveContainer" containerID="a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.450974 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.463341 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bqhkl"] Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.467850 4735 scope.go:117] "RemoveContainer" containerID="89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.531853 4735 scope.go:117] "RemoveContainer" containerID="900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef" Jan 31 16:06:29 crc kubenswrapper[4735]: E0131 16:06:29.532369 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef\": container with ID starting with 900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef not found: ID does not exist" containerID="900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.532403 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef"} err="failed to get container status \"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef\": rpc error: code = NotFound desc = could not find container \"900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef\": container with ID starting with 900e6827b199b92e8922cca277c5e3fb2cb2bdf29805fc7de1a702b0185e91ef not found: ID does not exist" Jan 31 16:06:29 crc 
kubenswrapper[4735]: I0131 16:06:29.532459 4735 scope.go:117] "RemoveContainer" containerID="a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c" Jan 31 16:06:29 crc kubenswrapper[4735]: E0131 16:06:29.532929 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c\": container with ID starting with a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c not found: ID does not exist" containerID="a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.532956 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c"} err="failed to get container status \"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c\": rpc error: code = NotFound desc = could not find container \"a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c\": container with ID starting with a3822d8735b5d86d93833dc5353314be7fdd203bce3e2421bce081fc300b969c not found: ID does not exist" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.532974 4735 scope.go:117] "RemoveContainer" containerID="89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697" Jan 31 16:06:29 crc kubenswrapper[4735]: E0131 16:06:29.534023 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697\": container with ID starting with 89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697 not found: ID does not exist" containerID="89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.534050 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697"} err="failed to get container status \"89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697\": rpc error: code = NotFound desc = could not find container \"89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697\": container with ID starting with 89873ea5a87fc04b5e0df2347ffc3250bf72c00fd41c2613924bc41b66568697 not found: ID does not exist" Jan 31 16:06:29 crc kubenswrapper[4735]: I0131 16:06:29.555147 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" path="/var/lib/kubelet/pods/6432f04d-6ceb-435f-8d36-184ed2bb8726/volumes" Jan 31 16:06:37 crc kubenswrapper[4735]: I0131 16:06:37.345931 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:06:37 crc kubenswrapper[4735]: I0131 16:06:37.346719 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.345912 4735 patch_prober.go:28] interesting 
pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.346954 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.347042 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.348322 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ab98c14ed86b153bf8199d581ef19375cfd78ddd48d4bb06f4b572d0b5af3ff"} pod="openshift-machine-config-operator/machine-config-daemon-gq77t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.348478 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" containerID="cri-o://0ab98c14ed86b153bf8199d581ef19375cfd78ddd48d4bb06f4b572d0b5af3ff" gracePeriod=600 Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.876829 4735 generic.go:334] "Generic (PLEG): container finished" podID="582442e0-b079-476d-849d-a4902306aba0" containerID="0ab98c14ed86b153bf8199d581ef19375cfd78ddd48d4bb06f4b572d0b5af3ff" exitCode=0 Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.876942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerDied","Data":"0ab98c14ed86b153bf8199d581ef19375cfd78ddd48d4bb06f4b572d0b5af3ff"} Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.877222 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" event={"ID":"582442e0-b079-476d-849d-a4902306aba0","Type":"ContainerStarted","Data":"372a8cf9f2503df9493a89e5e36874d32340bc66b87028f224212530ca275fe9"} Jan 31 16:07:07 crc kubenswrapper[4735]: I0131 16:07:07.877253 4735 scope.go:117] "RemoveContainer" containerID="15868eac63c1cc2984859aa6b8de8afa49d7690a2c01a331b3e62ebe8ae1ad87" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.122794 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:23 crc kubenswrapper[4735]: E0131 16:07:23.123682 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="extract-content" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.123698 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="extract-content" Jan 31 16:07:23 crc kubenswrapper[4735]: E0131 16:07:23.123716 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" 
containerName="registry-server" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.123726 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="registry-server" Jan 31 16:07:23 crc kubenswrapper[4735]: E0131 16:07:23.123752 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="extract-utilities" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.123759 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="extract-utilities" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.123980 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="6432f04d-6ceb-435f-8d36-184ed2bb8726" containerName="registry-server" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.126441 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.154562 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.190980 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ds97\" (UniqueName: \"kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.191121 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.191162 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.293512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ds97\" (UniqueName: \"kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.293657 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.293696 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content\") pod \"certified-operators-hqt2q\" (UID: 
\"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.294271 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.294450 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.322000 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ds97\" (UniqueName: \"kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97\") pod \"certified-operators-hqt2q\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:23 crc kubenswrapper[4735]: I0131 16:07:23.448385 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:24 crc kubenswrapper[4735]: I0131 16:07:23.901339 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:24 crc kubenswrapper[4735]: I0131 16:07:24.081872 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerStarted","Data":"b23a6c74a5121a4ff9466ca93d6b0c589a5769e8ea2d4fde423bd6ebdc6dd513"} Jan 31 16:07:25 crc kubenswrapper[4735]: I0131 16:07:25.098497 4735 generic.go:334] "Generic (PLEG): container finished" podID="1fd03a17-5f60-4442-b7cd-24258affab63" containerID="f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a" exitCode=0 Jan 31 16:07:25 crc kubenswrapper[4735]: I0131 16:07:25.098600 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerDied","Data":"f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a"} Jan 31 16:07:26 crc kubenswrapper[4735]: I0131 16:07:26.121836 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerStarted","Data":"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb"} Jan 31 16:07:27 crc kubenswrapper[4735]: I0131 16:07:27.136030 4735 generic.go:334] "Generic (PLEG): container finished" podID="1fd03a17-5f60-4442-b7cd-24258affab63" containerID="7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb" exitCode=0 Jan 31 16:07:27 crc kubenswrapper[4735]: I0131 16:07:27.136139 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerDied","Data":"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb"} Jan 31 16:07:29 crc kubenswrapper[4735]: I0131 16:07:29.161382 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerStarted","Data":"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a"} Jan 31 16:07:29 crc kubenswrapper[4735]: I0131 16:07:29.181578 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqt2q" podStartSLOduration=3.739792198 podStartE2EDuration="6.181555836s" podCreationTimestamp="2026-01-31 16:07:23 +0000 UTC" firstStartedPulling="2026-01-31 16:07:25.103771439 +0000 UTC m=+4130.877100521" lastFinishedPulling="2026-01-31 16:07:27.545535107 +0000 UTC m=+4133.318864159" observedRunningTime="2026-01-31 16:07:29.17886764 +0000 UTC m=+4134.952196722" watchObservedRunningTime="2026-01-31 16:07:29.181555836 +0000 UTC m=+4134.954884888" Jan 31 16:07:33 crc kubenswrapper[4735]: I0131 16:07:33.448610 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:33 crc kubenswrapper[4735]: I0131 16:07:33.449104 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:33 crc kubenswrapper[4735]: I0131 16:07:33.516495 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:34 crc kubenswrapper[4735]: I0131 16:07:34.267018 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:34 crc kubenswrapper[4735]: I0131 16:07:34.331389 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.234185 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hqt2q" podUID="1fd03a17-5f60-4442-b7cd-24258affab63" containerName="registry-server" containerID="cri-o://a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a" gracePeriod=2 Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.860849 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.903370 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ds97\" (UniqueName: \"kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97\") pod \"1fd03a17-5f60-4442-b7cd-24258affab63\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.903507 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities\") pod \"1fd03a17-5f60-4442-b7cd-24258affab63\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.903571 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content\") pod \"1fd03a17-5f60-4442-b7cd-24258affab63\" (UID: \"1fd03a17-5f60-4442-b7cd-24258affab63\") " Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.904679 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities" (OuterVolumeSpecName: "utilities") pod "1fd03a17-5f60-4442-b7cd-24258affab63" (UID: "1fd03a17-5f60-4442-b7cd-24258affab63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.912492 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97" (OuterVolumeSpecName: "kube-api-access-4ds97") pod "1fd03a17-5f60-4442-b7cd-24258affab63" (UID: "1fd03a17-5f60-4442-b7cd-24258affab63"). InnerVolumeSpecName "kube-api-access-4ds97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 16:07:36 crc kubenswrapper[4735]: I0131 16:07:36.973803 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fd03a17-5f60-4442-b7cd-24258affab63" (UID: "1fd03a17-5f60-4442-b7cd-24258affab63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.006315 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ds97\" (UniqueName: \"kubernetes.io/projected/1fd03a17-5f60-4442-b7cd-24258affab63-kube-api-access-4ds97\") on node \"crc\" DevicePath \"\"" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.006344 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.006353 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd03a17-5f60-4442-b7cd-24258affab63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.249462 4735 generic.go:334] "Generic (PLEG): container finished" podID="1fd03a17-5f60-4442-b7cd-24258affab63" containerID="a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a" exitCode=0 Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.249608 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqt2q" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.249623 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerDied","Data":"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a"} Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.251128 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqt2q" event={"ID":"1fd03a17-5f60-4442-b7cd-24258affab63","Type":"ContainerDied","Data":"b23a6c74a5121a4ff9466ca93d6b0c589a5769e8ea2d4fde423bd6ebdc6dd513"} Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.251183 4735 scope.go:117] "RemoveContainer" containerID="a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.295916 4735 scope.go:117] "RemoveContainer" containerID="7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.315183 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.326604 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hqt2q"] Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.352302 4735 scope.go:117] "RemoveContainer" containerID="f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.398173 4735 scope.go:117] "RemoveContainer" containerID="a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a" Jan 31 16:07:37 crc kubenswrapper[4735]: E0131 16:07:37.398727 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a\": container with ID starting with a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a not found: ID does not exist" containerID="a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.398796 
4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a"} err="failed to get container status \"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a\": rpc error: code = NotFound desc = could not find container \"a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a\": container with ID starting with a9feeaa081dd802a6a4a2d4819e9525bfd42b5e36b0fb55aa40d324e8d18615a not found: ID does not exist" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.398840 4735 scope.go:117] "RemoveContainer" containerID="7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb" Jan 31 16:07:37 crc kubenswrapper[4735]: E0131 16:07:37.399361 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb\": container with ID starting with 7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb not found: ID does not exist" containerID="7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.399417 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb"} err="failed to get container status \"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb\": rpc error: code = NotFound desc = could not find container \"7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb\": container with ID starting with 7333278fa3a15c4d5d0ff61a7b7c3a5bf1294aad8c227524c9e42d2dad4e81eb not found: ID does not exist" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.399478 4735 scope.go:117] "RemoveContainer" containerID="f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a" Jan 31 16:07:37 crc kubenswrapper[4735]: E0131 16:07:37.399809 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a\": container with ID starting with f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a not found: ID does not exist" containerID="f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.399850 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a"} err="failed to get container status \"f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a\": rpc error: code = NotFound desc = could not find container \"f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a\": container with ID starting with f78001a79e52c54b707964f2de028d4dc7227c4017f54347b48956edee94c27a not found: ID does not exist" Jan 31 16:07:37 crc kubenswrapper[4735]: I0131 16:07:37.558473 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd03a17-5f60-4442-b7cd-24258affab63" path="/var/lib/kubelet/pods/1fd03a17-5f60-4442-b7cd-24258affab63/volumes" Jan 31 16:09:07 crc kubenswrapper[4735]: I0131 16:09:07.345757 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 31 16:09:07 crc kubenswrapper[4735]: I0131 16:09:07.346769 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 16:09:37 crc kubenswrapper[4735]: I0131 16:09:37.346291 4735 patch_prober.go:28] interesting pod/machine-config-daemon-gq77t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 16:09:37 crc kubenswrapper[4735]: I0131 16:09:37.347166 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gq77t" podUID="582442e0-b079-476d-849d-a4902306aba0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137424516024455 0ustar coreroot
var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137424516017372 5ustar coreroot
var/home/core/zuul-output/artifacts/0000755000175000017500000000000015137413700016506 5ustar corecore
var/home/core/zuul-output/docs/0000755000175000017500000000000015137413700015456 5ustar corecore